00:00:00.001 Started by upstream project "autotest-per-patch" build number 132780 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.101 The recommended git tool is: git 00:00:00.101 using credential 00000000-0000-0000-0000-000000000002 00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.182 Fetching changes from the remote Git repository 00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.269 Using shallow fetch with depth 1 00:00:00.269 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.269 > git --version # timeout=10 00:00:00.336 > git --version # 'git version 2.39.2' 00:00:00.336 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.372 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.372 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.984 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.999 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.040 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.040 > git config core.sparsecheckout # timeout=10 00:00:06.054 > git read-tree -mu HEAD # timeout=10 00:00:06.071 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.095 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.095 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.189 [Pipeline] Start of Pipeline 00:00:06.207 [Pipeline] library 00:00:06.209 Loading library shm_lib@master 00:00:06.209 Library shm_lib@master is cached. Copying from home. 00:00:06.229 [Pipeline] node 00:32:36.964 Resuming build at Mon Dec 09 09:44:44 UTC 2024 after Jenkins restart 00:32:36.968 Ready to run at Mon Dec 09 09:44:44 UTC 2024 00:32:46.377 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:32:46.405 [Pipeline] { 00:32:46.432 [Pipeline] catchError 00:32:46.492 [Pipeline] { 00:32:46.539 [Pipeline] wrap 00:32:46.574 [Pipeline] { 00:32:46.602 [Pipeline] stage 00:32:46.606 [Pipeline] { (Prologue) 00:32:46.723 [Pipeline] echo 00:32:46.727 Node: VM-host-SM9 00:32:46.745 [Pipeline] cleanWs 00:32:47.148 [WS-CLEANUP] Deleting project workspace... 00:32:47.148 [WS-CLEANUP] Deferred wipeout is used... 
00:32:47.160 [WS-CLEANUP] done 00:32:47.900 [Pipeline] setCustomBuildProperty 00:32:48.012 [Pipeline] httpRequest 00:32:52.104 [Pipeline] echo 00:32:52.107 Sorcerer 10.211.164.101 is dead 00:32:52.118 [Pipeline] httpRequest 00:32:52.179 [Pipeline] echo 00:32:52.181 Sorcerer 10.211.164.96 is dead 00:32:52.196 [Pipeline] httpRequest 00:32:55.213 [Pipeline] echo 00:32:55.215 Sorcerer 10.211.164.20 is dead 00:32:55.225 [Pipeline] httpRequest 00:32:58.255 [Pipeline] echo 00:32:58.257 Sorcerer 10.211.164.23 is dead 00:32:58.265 [Pipeline] httpRequest 00:32:59.672 [Pipeline] echo 00:32:59.673 Sorcerer 10.211.164.112 is alive 00:32:59.683 [Pipeline] retry 00:32:59.684 [Pipeline] { 00:32:59.698 [Pipeline] httpRequest 00:32:59.722 HttpMethod: GET 00:32:59.723 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:32:59.727 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:32:59.732 Response Code: HTTP/1.1 404 Not Found 00:32:59.733 Success: Status code 404 is in the accepted range: 200,404 00:32:59.734 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:32:59.736 [Pipeline] } 00:32:59.753 [Pipeline] // retry 00:32:59.760 [Pipeline] sh 00:33:00.049 + rm -f jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:00.065 [Pipeline] retry 00:33:00.066 [Pipeline] { 00:33:00.077 [Pipeline] checkout 00:33:00.082 The recommended git tool is: git 00:33:00.566 using credential 00000000-0000-0000-0000-000000000002 00:33:00.571 Wiping out workspace first. 00:33:00.580 Cloning the remote Git repository 00:33:00.584 Honoring refspec on initial clone 00:33:00.609 Cloning repository https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:33:00.630 > git init /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp # timeout=10 00:33:00.687 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:33:00.687 > git --version # timeout=10 00:33:00.694 > git --version # 'git version 2.25.1' 00:33:00.695 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:33:00.703 Setting http proxy: proxy-dmz.intel.com:911 00:33:00.703 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=10 00:33:22.505 Avoid second fetch 00:33:22.535 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:33:22.841 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:33:22.845 [Pipeline] } 00:33:22.862 [Pipeline] // retry 00:33:22.871 [Pipeline] sh 00:33:23.155 + hash pigz 00:33:23.155 + tar -czf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz jbp 00:33:22.103 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:33:22.108 > git config --add remote.origin.fetch refs/heads/master # timeout=10 00:33:22.506 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:33:22.524 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:33:22.539 > git config core.sparsecheckout # timeout=10 00:33:22.544 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:33:24.108 [Pipeline] retry 00:33:24.111 [Pipeline] { 00:33:24.129 [Pipeline] httpRequest 00:33:24.137 HttpMethod: PUT 00:33:24.137 URL: http://10.211.164.112/cgi-bin/sorcerer.py?group=packages&filename=jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:24.146 Sending 
request to url: http://10.211.164.112/cgi-bin/sorcerer.py?group=packages&filename=jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:24.353 Response Code: HTTP/1.1 200 OK 00:33:24.363 Success: Status code 200 is in the accepted range: 200 00:33:24.366 [Pipeline] } 00:33:24.384 [Pipeline] // retry 00:33:24.394 [Pipeline] echo 00:33:24.396 00:33:24.396 Locking 00:33:24.396 Waited 0s for lock 00:33:24.396 Everything Fine. Saved: /storage/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:33:24.396 00:33:24.411 [Pipeline] httpRequest 00:33:24.802 [Pipeline] echo 00:33:24.804 Sorcerer 10.211.164.112 is alive 00:33:24.814 [Pipeline] retry 00:33:24.816 [Pipeline] { 00:33:24.830 [Pipeline] httpRequest 00:33:24.835 HttpMethod: GET 00:33:24.836 URL: http://10.211.164.112/packages/spdk_070cd5283019e08421167abc12b7c611b5f9ba18.tar.gz 00:33:24.836 Sending request to url: http://10.211.164.112/packages/spdk_070cd5283019e08421167abc12b7c611b5f9ba18.tar.gz 00:33:24.839 Response Code: HTTP/1.1 404 Not Found 00:33:24.840 Success: Status code 404 is in the accepted range: 200,404 00:33:24.840 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_070cd5283019e08421167abc12b7c611b5f9ba18.tar.gz 00:33:24.844 [Pipeline] } 00:33:24.862 [Pipeline] // retry 00:33:24.871 [Pipeline] sh 00:33:25.156 + rm -f spdk_070cd5283019e08421167abc12b7c611b5f9ba18.tar.gz 00:33:25.177 [Pipeline] retry 00:33:25.180 [Pipeline] { 00:33:25.210 [Pipeline] checkout 00:33:25.221 The recommended git tool is: NONE 00:33:25.232 using credential 00000000-0000-0000-0000-000000000002 00:33:25.235 Wiping out workspace first. 00:33:25.243 Cloning the remote Git repository 00:33:25.246 Honoring refspec on initial clone 00:33:25.248 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:33:25.248 > git init /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk # timeout=10 00:33:25.259 Using reference repository: /var/ci_repos/spdk_multi 00:33:25.259 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:33:25.259 > git --version # timeout=10 00:33:25.262 > git --version # 'git version 2.25.1' 00:33:25.262 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:33:25.265 Setting http proxy: proxy-dmz.intel.com:911 00:33:25.265 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/96/25496/8 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:34:19.552 Avoid second fetch 00:34:19.568 Checking out Revision 070cd5283019e08421167abc12b7c611b5f9ba18 (FETCH_HEAD) 00:34:19.530 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:34:19.535 > git config --add remote.origin.fetch refs/changes/96/25496/8 # timeout=10 00:34:19.539 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:34:19.552 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:34:19.561 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:34:19.568 > git config core.sparsecheckout # timeout=10 00:34:19.573 > git checkout -f 070cd5283019e08421167abc12b7c611b5f9ba18 # timeout=10 00:34:19.893 Commit message: "env: extend the page table to support 4-KiB mapping" 00:34:19.902 First time build. Skipping changelog. 
00:34:19.895 > git rev-list --no-walk 961c68b08c66ab95493bd99b2eb21fd28b63039e # timeout=10 00:34:19.905 > git remote # timeout=10 00:34:19.908 > git submodule init # timeout=10 00:34:19.964 > git submodule sync # timeout=10 00:34:20.015 > git config --get remote.origin.url # timeout=10 00:34:20.024 > git submodule init # timeout=10 00:34:20.071 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:34:20.077 > git config --get submodule.dpdk.url # timeout=10 00:34:20.081 > git remote # timeout=10 00:34:20.086 > git config --get remote.origin.url # timeout=10 00:34:20.090 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:34:20.095 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:34:20.099 > git remote # timeout=10 00:34:20.104 > git config --get remote.origin.url # timeout=10 00:34:20.108 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:34:20.111 > git config --get submodule.isa-l.url # timeout=10 00:34:20.116 > git remote # timeout=10 00:34:20.119 > git config --get remote.origin.url # timeout=10 00:34:20.122 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:34:20.126 > git config --get submodule.ocf.url # timeout=10 00:34:20.130 > git remote # timeout=10 00:34:20.134 > git config --get remote.origin.url # timeout=10 00:34:20.138 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:34:20.141 > git config --get submodule.libvfio-user.url # timeout=10 00:34:20.145 > git remote # timeout=10 00:34:20.149 > git config --get remote.origin.url # timeout=10 00:34:20.153 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:34:20.156 > git config --get submodule.xnvme.url # timeout=10 00:34:20.160 > git remote # timeout=10 00:34:20.164 > git config --get remote.origin.url # timeout=10 00:34:20.168 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:34:20.172 > git config --get submodule.isa-l-crypto.url # timeout=10 00:34:20.176 > git remote # timeout=10 00:34:20.180 > git config --get remote.origin.url # timeout=10 00:34:20.184 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:34:20.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:34:20.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:34:20.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:34:20.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:34:20.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:34:20.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:34:20.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:34:20.206 Setting http proxy: proxy-dmz.intel.com:911 00:34:20.206 Setting http proxy: proxy-dmz.intel.com:911 00:34:20.206 Setting http proxy: proxy-dmz.intel.com:911 00:34:20.206 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:34:20.206 Setting http proxy: proxy-dmz.intel.com:911 00:34:20.206 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:34:20.206 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:34:20.206 Setting http proxy: proxy-dmz.intel.com:911 00:34:20.207 Setting http proxy: proxy-dmz.intel.com:911 00:34:20.207 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:34:20.207 > git submodule 
update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:34:20.207 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:34:20.207 Setting http proxy: proxy-dmz.intel.com:911 00:34:20.207 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:34:46.926 [Pipeline] dir 00:34:46.927 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:34:46.929 [Pipeline] { 00:34:46.945 [Pipeline] sh 00:34:47.230 ++ nproc 00:34:47.230 + threads=40 00:34:47.230 + git repack -a -d --threads=40 00:34:53.797 + git submodule foreach git repack -a -d --threads=40 00:34:53.797 Entering 'dpdk' 00:34:57.087 Entering 'intel-ipsec-mb' 00:34:57.087 Entering 'isa-l' 00:34:57.347 Entering 'isa-l-crypto' 00:34:57.347 Entering 'libvfio-user' 00:34:57.607 Entering 'ocf' 00:34:57.866 Entering 'xnvme' 00:34:58.125 + find .git -type f -name alternates -print -delete 00:34:58.125 .git/objects/info/alternates 00:34:58.125 .git/modules/isa-l/objects/info/alternates 00:34:58.125 .git/modules/libvfio-user/objects/info/alternates 00:34:58.125 .git/modules/isa-l-crypto/objects/info/alternates 00:34:58.125 .git/modules/dpdk/objects/info/alternates 00:34:58.125 .git/modules/intel-ipsec-mb/objects/info/alternates 00:34:58.125 .git/modules/xnvme/objects/info/alternates 00:34:58.125 .git/modules/ocf/objects/info/alternates 00:34:58.138 [Pipeline] } 00:34:58.159 [Pipeline] // dir 00:34:58.166 [Pipeline] } 00:34:58.185 [Pipeline] // retry 00:34:58.193 [Pipeline] sh 00:34:58.476 + hash pigz 00:34:58.476 + tar -czf spdk_070cd5283019e08421167abc12b7c611b5f9ba18.tar.gz spdk 00:35:13.380 [Pipeline] retry 00:35:13.382 [Pipeline] { 00:35:13.400 [Pipeline] httpRequest 00:35:13.408 HttpMethod: PUT 00:35:13.409 URL: http://10.211.164.112/cgi-bin/sorcerer.py?group=packages&filename=spdk_070cd5283019e08421167abc12b7c611b5f9ba18.tar.gz 00:35:13.410 Sending request to url: http://10.211.164.112/cgi-bin/sorcerer.py?group=packages&filename=spdk_070cd5283019e08421167abc12b7c611b5f9ba18.tar.gz 00:35:15.885 Response Code: HTTP/1.1 200 OK 00:35:15.890 Success: Status code 200 is in the accepted range: 200 00:35:15.893 [Pipeline] } 00:35:15.911 [Pipeline] // retry 00:35:15.921 [Pipeline] echo 00:35:15.922 00:35:15.922 Locking 00:35:15.922 Waited 0s for lock 00:35:15.922 Everything Fine. 
Saved: /storage/packages/spdk_070cd5283019e08421167abc12b7c611b5f9ba18.tar.gz 00:35:15.922 00:35:15.926 [Pipeline] sh 00:35:16.207 + git -C spdk log --oneline -n5 00:35:16.207 070cd5283 env: extend the page table to support 4-KiB mapping 00:35:16.207 6c714c5fe env: add mem_map_fini and vtophys_fini for cleanup 00:35:16.207 b7d7c4b24 env: handle possible DPDK errors in mem_map_init 00:35:16.207 b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode 00:35:16.207 496bfd677 env: match legacy mem mode config with DPDK 00:35:16.231 [Pipeline] writeFile 00:35:16.249 [Pipeline] sh 00:35:16.534 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:35:16.548 [Pipeline] sh 00:35:16.834 + cat autorun-spdk.conf 00:35:16.835 SPDK_RUN_FUNCTIONAL_TEST=1 00:35:16.835 SPDK_TEST_NVMF=1 00:35:16.835 SPDK_TEST_NVMF_TRANSPORT=tcp 00:35:16.835 SPDK_TEST_URING=1 00:35:16.835 SPDK_TEST_USDT=1 00:35:16.835 SPDK_RUN_UBSAN=1 00:35:16.835 NET_TYPE=virt 00:35:16.835 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:35:16.842 RUN_NIGHTLY=0 00:35:16.844 [Pipeline] } 00:35:16.860 [Pipeline] // stage 00:35:16.879 [Pipeline] stage 00:35:16.881 [Pipeline] { (Run VM) 00:35:16.896 [Pipeline] sh 00:35:17.180 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:35:17.180 + echo 'Start stage prepare_nvme.sh' 00:35:17.180 Start stage prepare_nvme.sh 00:35:17.180 + [[ -n 0 ]] 00:35:17.180 + disk_prefix=ex0 00:35:17.180 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:35:17.180 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:35:17.180 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:35:17.180 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:35:17.180 ++ SPDK_TEST_NVMF=1 00:35:17.180 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:35:17.180 ++ SPDK_TEST_URING=1 00:35:17.180 ++ SPDK_TEST_USDT=1 00:35:17.180 ++ SPDK_RUN_UBSAN=1 00:35:17.180 ++ NET_TYPE=virt 00:35:17.180 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:35:17.180 ++ RUN_NIGHTLY=0 00:35:17.180 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:35:17.180 + nvme_files=() 00:35:17.180 + declare -A nvme_files 00:35:17.180 + backend_dir=/var/lib/libvirt/images/backends 00:35:17.180 + nvme_files['nvme.img']=5G 00:35:17.180 + nvme_files['nvme-cmb.img']=5G 00:35:17.180 + nvme_files['nvme-multi0.img']=4G 00:35:17.180 + nvme_files['nvme-multi1.img']=4G 00:35:17.180 + nvme_files['nvme-multi2.img']=4G 00:35:17.180 + nvme_files['nvme-openstack.img']=8G 00:35:17.180 + nvme_files['nvme-zns.img']=5G 00:35:17.180 + (( SPDK_TEST_NVME_PMR == 1 )) 00:35:17.180 + (( SPDK_TEST_FTL == 1 )) 00:35:17.180 + (( SPDK_TEST_NVME_FDP == 1 )) 00:35:17.180 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:35:17.180 + for nvme in "${!nvme_files[@]}" 00:35:17.180 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:35:17.180 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:35:17.180 + for nvme in "${!nvme_files[@]}" 00:35:17.180 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:35:17.180 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:35:17.180 + for nvme in "${!nvme_files[@]}" 00:35:17.180 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:35:17.180 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:35:17.180 + for nvme in "${!nvme_files[@]}" 00:35:17.180 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:35:17.180 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:35:17.180 + for nvme in "${!nvme_files[@]}" 00:35:17.180 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:35:17.181 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:35:17.181 + for nvme in "${!nvme_files[@]}" 00:35:17.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:35:17.181 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:35:17.181 + for nvme in "${!nvme_files[@]}" 00:35:17.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:35:17.441 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:35:17.441 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:35:17.441 + echo 'End stage prepare_nvme.sh' 00:35:17.441 End stage prepare_nvme.sh 00:35:17.453 [Pipeline] sh 00:35:17.737 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:35:17.738 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:35:17.738 00:35:17.738 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:35:17.738 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:35:17.738 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:35:17.738 HELP=0 00:35:17.738 DRY_RUN=0 00:35:17.738 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:35:17.738 NVME_DISKS_TYPE=nvme,nvme, 00:35:17.738 NVME_AUTO_CREATE=0 00:35:17.738 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:35:17.738 NVME_CMB=,, 00:35:17.738 NVME_PMR=,, 00:35:17.738 NVME_ZNS=,, 00:35:17.738 NVME_MS=,, 00:35:17.738 NVME_FDP=,, 
00:35:17.738 SPDK_VAGRANT_DISTRO=fedora39 00:35:17.738 SPDK_VAGRANT_VMCPU=10 00:35:17.738 SPDK_VAGRANT_VMRAM=12288 00:35:17.738 SPDK_VAGRANT_PROVIDER=libvirt 00:35:17.738 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:35:17.738 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:35:17.738 SPDK_OPENSTACK_NETWORK=0 00:35:17.738 VAGRANT_PACKAGE_BOX=0 00:35:17.738 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:35:17.738 FORCE_DISTRO=true 00:35:17.738 VAGRANT_BOX_VERSION= 00:35:17.738 EXTRA_VAGRANTFILES= 00:35:17.738 NIC_MODEL=e1000 00:35:17.738 00:35:17.738 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:35:17.738 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:35:21.029 Bringing machine 'default' up with 'libvirt' provider... 00:35:21.288 ==> default: Creating image (snapshot of base box volume). 00:35:21.548 ==> default: Creating domain with the following settings... 00:35:21.548 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733737648_a8502cf632e9fa8db3b9 00:35:21.548 ==> default: -- Domain type: kvm 00:35:21.548 ==> default: -- Cpus: 10 00:35:21.548 ==> default: -- Feature: acpi 00:35:21.548 ==> default: -- Feature: apic 00:35:21.548 ==> default: -- Feature: pae 00:35:21.548 ==> default: -- Memory: 12288M 00:35:21.548 ==> default: -- Memory Backing: hugepages: 00:35:21.548 ==> default: -- Management MAC: 00:35:21.548 ==> default: -- Loader: 00:35:21.548 ==> default: -- Nvram: 00:35:21.548 ==> default: -- Base box: spdk/fedora39 00:35:21.548 ==> default: -- Storage pool: default 00:35:21.548 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733737648_a8502cf632e9fa8db3b9.img (20G) 00:35:21.548 ==> default: -- Volume Cache: default 00:35:21.548 ==> default: -- Kernel: 00:35:21.548 ==> default: -- Initrd: 00:35:21.548 ==> default: -- Graphics Type: vnc 00:35:21.548 ==> default: -- Graphics Port: -1 00:35:21.548 ==> default: -- Graphics IP: 127.0.0.1 00:35:21.548 ==> default: -- Graphics Password: Not defined 00:35:21.548 ==> default: -- Video Type: cirrus 00:35:21.548 ==> default: -- Video VRAM: 9216 00:35:21.548 ==> default: -- Sound Type: 00:35:21.548 ==> default: -- Keymap: en-us 00:35:21.548 ==> default: -- TPM Path: 00:35:21.548 ==> default: -- INPUT: type=mouse, bus=ps2 00:35:21.548 ==> default: -- Command line args: 00:35:21.548 ==> default: -> value=-device, 00:35:21.548 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:35:21.548 ==> default: -> value=-drive, 00:35:21.548 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:35:21.548 ==> default: -> value=-device, 00:35:21.548 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:35:21.548 ==> default: -> value=-device, 00:35:21.548 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:35:21.548 ==> default: -> value=-drive, 00:35:21.548 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:35:21.548 ==> default: -> value=-device, 00:35:21.548 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:35:21.548 ==> default: -> value=-drive, 00:35:21.548 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:35:21.548 ==> default: -> value=-device, 00:35:21.548 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:35:21.548 ==> default: -> value=-drive, 00:35:21.548 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:35:21.548 ==> default: -> value=-device, 00:35:21.548 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:35:21.548 ==> default: Creating shared folders metadata... 00:35:21.548 ==> default: Starting domain. 00:35:22.929 ==> default: Waiting for domain to get an IP address... 00:35:41.013 ==> default: Waiting for SSH to become available... 00:35:41.013 ==> default: Configuring and enabling network interfaces... 00:35:43.550 default: SSH address: 192.168.121.139:22 00:35:43.550 default: SSH username: vagrant 00:35:43.550 default: SSH auth method: private key 00:35:45.453 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:35:53.572 ==> default: Mounting SSHFS shared folder... 00:35:54.140 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:35:54.140 ==> default: Checking Mount.. 00:35:55.519 ==> default: Folder Successfully Mounted! 00:35:55.519 ==> default: Running provisioner: file... 00:35:56.086 default: ~/.gitconfig => .gitconfig 00:35:56.653 00:35:56.653 SUCCESS! 00:35:56.653 00:35:56.653 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:35:56.653 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:35:56.653 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:35:56.653 00:35:56.661 [Pipeline] } 00:35:56.676 [Pipeline] // stage 00:35:56.685 [Pipeline] dir 00:35:56.686 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:35:56.687 [Pipeline] { 00:35:56.699 [Pipeline] catchError 00:35:56.701 [Pipeline] { 00:35:56.713 [Pipeline] sh 00:35:56.992 + vagrant ssh-config --host vagrant 00:35:56.992 + sed -ne /^Host/,$p 00:35:56.992 + tee ssh_conf 00:36:01.178 Host vagrant 00:36:01.178 HostName 192.168.121.139 00:36:01.178 User vagrant 00:36:01.178 Port 22 00:36:01.178 UserKnownHostsFile /dev/null 00:36:01.178 StrictHostKeyChecking no 00:36:01.178 PasswordAuthentication no 00:36:01.178 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:36:01.178 IdentitiesOnly yes 00:36:01.178 LogLevel FATAL 00:36:01.178 ForwardAgent yes 00:36:01.178 ForwardX11 yes 00:36:01.178 00:36:01.192 [Pipeline] withEnv 00:36:01.194 [Pipeline] { 00:36:01.207 [Pipeline] sh 00:36:01.487 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:36:01.487 source /etc/os-release 00:36:01.487 [[ -e /image.version ]] && img=$(< /image.version) 00:36:01.487 # Minimal, systemd-like check. 
00:36:01.487 if [[ -e /.dockerenv ]]; then 00:36:01.487 # Clear garbage from the node's name: 00:36:01.487 # agt-er_autotest_547-896 -> autotest_547-896 00:36:01.487 # $HOSTNAME is the actual container id 00:36:01.487 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:36:01.487 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:36:01.487 # We can assume this is a mount from a host where container is running, 00:36:01.487 # so fetch its hostname to easily identify the target swarm worker. 00:36:01.487 container="$(< /etc/hostname) ($agent)" 00:36:01.487 else 00:36:01.487 # Fallback 00:36:01.487 container=$agent 00:36:01.487 fi 00:36:01.487 fi 00:36:01.487 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:36:01.487 00:36:01.497 [Pipeline] } 00:36:01.512 [Pipeline] // withEnv 00:36:01.523 [Pipeline] setCustomBuildProperty 00:36:01.541 [Pipeline] stage 00:36:01.543 [Pipeline] { (Tests) 00:36:01.558 [Pipeline] sh 00:36:01.836 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:36:02.107 [Pipeline] sh 00:36:02.386 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:36:02.658 [Pipeline] timeout 00:36:02.659 Timeout set to expire in 1 hr 0 min 00:36:02.660 [Pipeline] { 00:36:02.675 [Pipeline] sh 00:36:02.954 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:36:03.521 HEAD is now at 070cd5283 env: extend the page table to support 4-KiB mapping 00:36:03.532 [Pipeline] sh 00:36:03.812 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:36:04.084 [Pipeline] sh 00:36:04.364 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:36:04.639 [Pipeline] sh 00:36:04.921 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:36:05.180 ++ readlink -f spdk_repo 00:36:05.180 + DIR_ROOT=/home/vagrant/spdk_repo 00:36:05.180 + [[ -n /home/vagrant/spdk_repo ]] 00:36:05.180 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:36:05.180 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:36:05.180 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:36:05.180 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:36:05.180 + [[ -d /home/vagrant/spdk_repo/output ]] 00:36:05.180 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:36:05.180 + cd /home/vagrant/spdk_repo 00:36:05.180 + source /etc/os-release 00:36:05.180 ++ NAME='Fedora Linux' 00:36:05.180 ++ VERSION='39 (Cloud Edition)' 00:36:05.180 ++ ID=fedora 00:36:05.180 ++ VERSION_ID=39 00:36:05.180 ++ VERSION_CODENAME= 00:36:05.180 ++ PLATFORM_ID=platform:f39 00:36:05.180 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:36:05.180 ++ ANSI_COLOR='0;38;2;60;110;180' 00:36:05.180 ++ LOGO=fedora-logo-icon 00:36:05.180 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:36:05.180 ++ HOME_URL=https://fedoraproject.org/ 00:36:05.180 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:36:05.180 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:36:05.180 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:36:05.180 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:36:05.180 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:36:05.180 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:36:05.180 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:36:05.180 ++ SUPPORT_END=2024-11-12 00:36:05.180 ++ VARIANT='Cloud Edition' 00:36:05.180 ++ VARIANT_ID=cloud 00:36:05.180 + uname -a 00:36:05.180 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:36:05.180 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:36:05.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:05.439 Hugepages 00:36:05.439 node hugesize free / total 00:36:05.439 node0 1048576kB 0 / 0 00:36:05.439 node0 2048kB 0 / 0 00:36:05.439 00:36:05.439 Type BDF Vendor Device NUMA Driver Device Block devices 00:36:05.698 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:36:05.698 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:36:05.698 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:36:05.698 + rm -f /tmp/spdk-ld-path 00:36:05.698 + source autorun-spdk.conf 00:36:05.698 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:36:05.698 ++ SPDK_TEST_NVMF=1 00:36:05.698 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:36:05.698 ++ SPDK_TEST_URING=1 00:36:05.698 ++ SPDK_TEST_USDT=1 00:36:05.698 ++ SPDK_RUN_UBSAN=1 00:36:05.698 ++ NET_TYPE=virt 00:36:05.698 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:36:05.698 ++ RUN_NIGHTLY=0 00:36:05.698 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:36:05.698 + [[ -n '' ]] 00:36:05.698 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:36:05.698 + for M in /var/spdk/build-*-manifest.txt 00:36:05.698 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:36:05.698 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:36:05.698 + for M in /var/spdk/build-*-manifest.txt 00:36:05.698 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:36:05.698 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:36:05.698 + for M in /var/spdk/build-*-manifest.txt 00:36:05.698 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:36:05.698 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:36:05.698 ++ uname 00:36:05.698 + [[ Linux == \L\i\n\u\x ]] 00:36:05.698 + sudo dmesg -T 00:36:05.698 + sudo dmesg --clear 00:36:05.698 + dmesg_pid=5256 00:36:05.698 + [[ Fedora Linux == FreeBSD ]] 00:36:05.698 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:36:05.698 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:36:05.698 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:36:05.698 + [[ -x /usr/src/fio-static/fio ]] 00:36:05.698 + sudo dmesg -Tw 00:36:05.698 + export FIO_BIN=/usr/src/fio-static/fio 00:36:05.698 + FIO_BIN=/usr/src/fio-static/fio 00:36:05.698 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:36:05.698 + [[ ! -v VFIO_QEMU_BIN ]] 00:36:05.698 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:36:05.698 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:36:05.698 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:36:05.698 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:36:05.699 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:36:05.699 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:36:05.699 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:36:05.699 09:48:13 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:36:05.699 09:48:13 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:36:05.699 09:48:13 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:36:05.699 09:48:13 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:36:05.699 09:48:13 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:36:05.699 09:48:13 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:36:05.699 09:48:13 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:36:05.699 09:48:13 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:36:05.699 09:48:13 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:36:05.699 09:48:13 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:36:05.699 09:48:13 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:36:05.699 09:48:13 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:36:05.699 09:48:13 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:36:05.958 09:48:13 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:36:05.958 09:48:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:05.958 09:48:13 -- scripts/common.sh@15 -- $ shopt -s extglob 00:36:05.958 09:48:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:05.958 09:48:13 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:05.958 09:48:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:05.958 09:48:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.958 09:48:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.958 09:48:13 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.958 09:48:13 -- paths/export.sh@5 -- $ export PATH 00:36:05.958 09:48:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.958 09:48:13 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:36:05.958 09:48:13 -- common/autobuild_common.sh@493 -- $ date +%s 00:36:05.958 09:48:13 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733737693.XXXXXX 00:36:05.958 09:48:13 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733737693.6gFVe2 00:36:05.958 09:48:13 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:36:05.958 09:48:13 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:36:05.958 09:48:13 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:36:05.958 09:48:13 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:36:05.958 09:48:13 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:36:05.958 09:48:13 -- common/autobuild_common.sh@509 -- $ get_config_params 00:36:05.958 09:48:13 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:36:05.958 09:48:13 -- common/autotest_common.sh@10 -- $ set +x 00:36:05.958 09:48:13 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:36:05.958 09:48:13 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:36:05.958 09:48:13 -- pm/common@17 -- $ local monitor 00:36:05.958 09:48:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:05.958 09:48:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:05.958 09:48:13 -- pm/common@25 -- $ sleep 1 00:36:05.958 09:48:13 -- pm/common@21 -- $ date +%s 00:36:05.958 09:48:13 -- pm/common@21 -- $ date +%s 00:36:05.958 09:48:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733737693 00:36:05.958 09:48:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733737693 00:36:05.958 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733737693_collect-cpu-load.pm.log 00:36:05.958 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733737693_collect-vmstat.pm.log 00:36:06.894 09:48:14 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:36:06.894 09:48:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:36:06.894 09:48:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:36:06.894 09:48:14 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:36:06.894 09:48:14 -- spdk/autobuild.sh@16 -- $ date -u 00:36:06.894 Mon Dec 9 09:48:14 AM UTC 2024 00:36:06.894 09:48:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:36:06.894 v25.01-pre-316-g070cd5283 00:36:06.894 09:48:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:36:06.894 09:48:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:36:06.894 09:48:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:36:06.894 09:48:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:36:06.894 09:48:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:36:06.894 09:48:14 -- common/autotest_common.sh@10 -- $ set +x 00:36:06.894 ************************************ 00:36:06.894 START TEST ubsan 00:36:06.894 ************************************ 00:36:06.894 using ubsan 00:36:06.894 09:48:14 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:36:06.894 00:36:06.894 real 0m0.000s 00:36:06.894 user 0m0.000s 00:36:06.894 sys 0m0.000s 00:36:06.894 09:48:14 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:36:06.894 09:48:14 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:36:06.894 ************************************ 00:36:06.894 END TEST ubsan 00:36:06.894 ************************************ 00:36:06.894 09:48:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:36:06.894 09:48:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:36:06.894 09:48:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:36:06.894 09:48:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:36:06.894 09:48:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:36:06.894 09:48:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:36:06.894 09:48:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:36:06.894 09:48:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:36:06.894 09:48:14 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:36:07.153 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:36:07.153 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:36:07.411 Using 'verbs' RDMA provider 00:36:23.228 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:36:35.434 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:36:35.434 Creating mk/config.mk...done. 00:36:35.434 Creating mk/cc.flags.mk...done. 00:36:35.434 Type 'make' to build. 
00:36:35.434 09:48:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:36:35.434 09:48:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:36:35.434 09:48:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:36:35.434 09:48:41 -- common/autotest_common.sh@10 -- $ set +x 00:36:35.435 ************************************ 00:36:35.435 START TEST make 00:36:35.435 ************************************ 00:36:35.435 09:48:41 make -- common/autotest_common.sh@1129 -- $ make -j10 00:36:35.435 make[1]: Nothing to be done for 'all'. 00:36:47.643 The Meson build system 00:36:47.643 Version: 1.5.0 00:36:47.643 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:36:47.643 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:36:47.643 Build type: native build 00:36:47.643 Program cat found: YES (/usr/bin/cat) 00:36:47.643 Project name: DPDK 00:36:47.643 Project version: 24.03.0 00:36:47.643 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:36:47.643 C linker for the host machine: cc ld.bfd 2.40-14 00:36:47.643 Host machine cpu family: x86_64 00:36:47.643 Host machine cpu: x86_64 00:36:47.643 Message: ## Building in Developer Mode ## 00:36:47.643 Program pkg-config found: YES (/usr/bin/pkg-config) 00:36:47.643 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:36:47.643 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:36:47.643 Program python3 found: YES (/usr/bin/python3) 00:36:47.643 Program cat found: YES (/usr/bin/cat) 00:36:47.643 Compiler for C supports arguments -march=native: YES 00:36:47.643 Checking for size of "void *" : 8 00:36:47.643 Checking for size of "void *" : 8 (cached) 00:36:47.643 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:36:47.643 Library m found: YES 00:36:47.643 Library numa found: YES 00:36:47.643 Has header "numaif.h" : YES 00:36:47.643 Library fdt found: NO 00:36:47.643 Library execinfo found: NO 00:36:47.643 Has header "execinfo.h" : YES 00:36:47.643 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:36:47.643 Run-time dependency libarchive found: NO (tried pkgconfig) 00:36:47.643 Run-time dependency libbsd found: NO (tried pkgconfig) 00:36:47.643 Run-time dependency jansson found: NO (tried pkgconfig) 00:36:47.643 Run-time dependency openssl found: YES 3.1.1 00:36:47.643 Run-time dependency libpcap found: YES 1.10.4 00:36:47.643 Has header "pcap.h" with dependency libpcap: YES 00:36:47.643 Compiler for C supports arguments -Wcast-qual: YES 00:36:47.643 Compiler for C supports arguments -Wdeprecated: YES 00:36:47.643 Compiler for C supports arguments -Wformat: YES 00:36:47.643 Compiler for C supports arguments -Wformat-nonliteral: NO 00:36:47.643 Compiler for C supports arguments -Wformat-security: NO 00:36:47.643 Compiler for C supports arguments -Wmissing-declarations: YES 00:36:47.643 Compiler for C supports arguments -Wmissing-prototypes: YES 00:36:47.643 Compiler for C supports arguments -Wnested-externs: YES 00:36:47.643 Compiler for C supports arguments -Wold-style-definition: YES 00:36:47.643 Compiler for C supports arguments -Wpointer-arith: YES 00:36:47.643 Compiler for C supports arguments -Wsign-compare: YES 00:36:47.643 Compiler for C supports arguments -Wstrict-prototypes: YES 00:36:47.643 Compiler for C supports arguments -Wundef: YES 00:36:47.643 Compiler for C supports arguments -Wwrite-strings: YES 00:36:47.643 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:36:47.643 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:36:47.643 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:36:47.643 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:36:47.643 Program objdump found: YES (/usr/bin/objdump) 00:36:47.643 Compiler for C supports arguments -mavx512f: YES 00:36:47.643 Checking if "AVX512 checking" compiles: YES 00:36:47.643 Fetching value of define "__SSE4_2__" : 1 00:36:47.643 Fetching value of define "__AES__" : 1 00:36:47.643 Fetching value of define "__AVX__" : 1 00:36:47.643 Fetching value of define "__AVX2__" : 1 00:36:47.643 Fetching value of define "__AVX512BW__" : (undefined) 00:36:47.643 Fetching value of define "__AVX512CD__" : (undefined) 00:36:47.643 Fetching value of define "__AVX512DQ__" : (undefined) 00:36:47.643 Fetching value of define "__AVX512F__" : (undefined) 00:36:47.643 Fetching value of define "__AVX512VL__" : (undefined) 00:36:47.643 Fetching value of define "__PCLMUL__" : 1 00:36:47.643 Fetching value of define "__RDRND__" : 1 00:36:47.643 Fetching value of define "__RDSEED__" : 1 00:36:47.643 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:36:47.644 Fetching value of define "__znver1__" : (undefined) 00:36:47.644 Fetching value of define "__znver2__" : (undefined) 00:36:47.644 Fetching value of define "__znver3__" : (undefined) 00:36:47.644 Fetching value of define "__znver4__" : (undefined) 00:36:47.644 Compiler for C supports arguments -Wno-format-truncation: YES 00:36:47.644 Message: lib/log: Defining dependency "log" 00:36:47.644 Message: lib/kvargs: Defining dependency "kvargs" 00:36:47.644 Message: lib/telemetry: Defining dependency "telemetry" 00:36:47.644 Checking for function "getentropy" : NO 00:36:47.644 Message: lib/eal: Defining dependency "eal" 00:36:47.644 Message: lib/ring: Defining dependency "ring" 00:36:47.644 Message: lib/rcu: Defining dependency "rcu" 00:36:47.644 Message: lib/mempool: Defining dependency "mempool" 00:36:47.644 Message: lib/mbuf: Defining dependency "mbuf" 00:36:47.644 Fetching value of define "__PCLMUL__" : 1 (cached) 00:36:47.644 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:36:47.644 Compiler for C supports arguments -mpclmul: YES 00:36:47.644 Compiler for C supports arguments -maes: YES 00:36:47.644 Compiler for C supports arguments -mavx512f: YES (cached) 00:36:47.644 Compiler for C supports arguments -mavx512bw: YES 00:36:47.644 Compiler for C supports arguments -mavx512dq: YES 00:36:47.644 Compiler for C supports arguments -mavx512vl: YES 00:36:47.644 Compiler for C supports arguments -mvpclmulqdq: YES 00:36:47.644 Compiler for C supports arguments -mavx2: YES 00:36:47.644 Compiler for C supports arguments -mavx: YES 00:36:47.644 Message: lib/net: Defining dependency "net" 00:36:47.644 Message: lib/meter: Defining dependency "meter" 00:36:47.644 Message: lib/ethdev: Defining dependency "ethdev" 00:36:47.644 Message: lib/pci: Defining dependency "pci" 00:36:47.644 Message: lib/cmdline: Defining dependency "cmdline" 00:36:47.644 Message: lib/hash: Defining dependency "hash" 00:36:47.644 Message: lib/timer: Defining dependency "timer" 00:36:47.644 Message: lib/compressdev: Defining dependency "compressdev" 00:36:47.644 Message: lib/cryptodev: Defining dependency "cryptodev" 00:36:47.644 Message: lib/dmadev: Defining dependency "dmadev" 00:36:47.644 Compiler for C supports arguments -Wno-cast-qual: YES 00:36:47.644 Message: lib/power: Defining 
dependency "power" 00:36:47.644 Message: lib/reorder: Defining dependency "reorder" 00:36:47.644 Message: lib/security: Defining dependency "security" 00:36:47.644 Has header "linux/userfaultfd.h" : YES 00:36:47.644 Has header "linux/vduse.h" : YES 00:36:47.644 Message: lib/vhost: Defining dependency "vhost" 00:36:47.644 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:36:47.644 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:36:47.644 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:36:47.644 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:36:47.644 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:36:47.644 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:36:47.644 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:36:47.644 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:36:47.644 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:36:47.644 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:36:47.644 Program doxygen found: YES (/usr/local/bin/doxygen) 00:36:47.644 Configuring doxy-api-html.conf using configuration 00:36:47.644 Configuring doxy-api-man.conf using configuration 00:36:47.644 Program mandb found: YES (/usr/bin/mandb) 00:36:47.644 Program sphinx-build found: NO 00:36:47.644 Configuring rte_build_config.h using configuration 00:36:47.644 Message: 00:36:47.644 ================= 00:36:47.644 Applications Enabled 00:36:47.644 ================= 00:36:47.644 00:36:47.644 apps: 00:36:47.644 00:36:47.644 00:36:47.644 Message: 00:36:47.644 ================= 00:36:47.644 Libraries Enabled 00:36:47.644 ================= 00:36:47.644 00:36:47.644 libs: 00:36:47.644 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:36:47.644 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:36:47.644 cryptodev, dmadev, power, reorder, security, vhost, 00:36:47.644 00:36:47.644 Message: 00:36:47.644 =============== 00:36:47.644 Drivers Enabled 00:36:47.644 =============== 00:36:47.644 00:36:47.644 common: 00:36:47.644 00:36:47.644 bus: 00:36:47.644 pci, vdev, 00:36:47.644 mempool: 00:36:47.644 ring, 00:36:47.644 dma: 00:36:47.644 00:36:47.644 net: 00:36:47.644 00:36:47.644 crypto: 00:36:47.644 00:36:47.644 compress: 00:36:47.644 00:36:47.644 vdpa: 00:36:47.644 00:36:47.644 00:36:47.644 Message: 00:36:47.644 ================= 00:36:47.644 Content Skipped 00:36:47.644 ================= 00:36:47.644 00:36:47.644 apps: 00:36:47.644 dumpcap: explicitly disabled via build config 00:36:47.644 graph: explicitly disabled via build config 00:36:47.644 pdump: explicitly disabled via build config 00:36:47.644 proc-info: explicitly disabled via build config 00:36:47.644 test-acl: explicitly disabled via build config 00:36:47.644 test-bbdev: explicitly disabled via build config 00:36:47.644 test-cmdline: explicitly disabled via build config 00:36:47.644 test-compress-perf: explicitly disabled via build config 00:36:47.644 test-crypto-perf: explicitly disabled via build config 00:36:47.644 test-dma-perf: explicitly disabled via build config 00:36:47.644 test-eventdev: explicitly disabled via build config 00:36:47.644 test-fib: explicitly disabled via build config 00:36:47.644 test-flow-perf: explicitly disabled via build config 00:36:47.644 test-gpudev: explicitly disabled via build config 00:36:47.644 test-mldev: explicitly disabled via build config 00:36:47.644 test-pipeline: 
explicitly disabled via build config 00:36:47.644 test-pmd: explicitly disabled via build config 00:36:47.644 test-regex: explicitly disabled via build config 00:36:47.644 test-sad: explicitly disabled via build config 00:36:47.644 test-security-perf: explicitly disabled via build config 00:36:47.644 00:36:47.644 libs: 00:36:47.644 argparse: explicitly disabled via build config 00:36:47.644 metrics: explicitly disabled via build config 00:36:47.644 acl: explicitly disabled via build config 00:36:47.644 bbdev: explicitly disabled via build config 00:36:47.644 bitratestats: explicitly disabled via build config 00:36:47.644 bpf: explicitly disabled via build config 00:36:47.644 cfgfile: explicitly disabled via build config 00:36:47.644 distributor: explicitly disabled via build config 00:36:47.644 efd: explicitly disabled via build config 00:36:47.644 eventdev: explicitly disabled via build config 00:36:47.644 dispatcher: explicitly disabled via build config 00:36:47.644 gpudev: explicitly disabled via build config 00:36:47.644 gro: explicitly disabled via build config 00:36:47.644 gso: explicitly disabled via build config 00:36:47.644 ip_frag: explicitly disabled via build config 00:36:47.644 jobstats: explicitly disabled via build config 00:36:47.644 latencystats: explicitly disabled via build config 00:36:47.644 lpm: explicitly disabled via build config 00:36:47.644 member: explicitly disabled via build config 00:36:47.644 pcapng: explicitly disabled via build config 00:36:47.644 rawdev: explicitly disabled via build config 00:36:47.644 regexdev: explicitly disabled via build config 00:36:47.644 mldev: explicitly disabled via build config 00:36:47.644 rib: explicitly disabled via build config 00:36:47.644 sched: explicitly disabled via build config 00:36:47.644 stack: explicitly disabled via build config 00:36:47.644 ipsec: explicitly disabled via build config 00:36:47.644 pdcp: explicitly disabled via build config 00:36:47.644 fib: explicitly disabled via build config 00:36:47.644 port: explicitly disabled via build config 00:36:47.644 pdump: explicitly disabled via build config 00:36:47.644 table: explicitly disabled via build config 00:36:47.644 pipeline: explicitly disabled via build config 00:36:47.644 graph: explicitly disabled via build config 00:36:47.644 node: explicitly disabled via build config 00:36:47.644 00:36:47.644 drivers: 00:36:47.644 common/cpt: not in enabled drivers build config 00:36:47.644 common/dpaax: not in enabled drivers build config 00:36:47.644 common/iavf: not in enabled drivers build config 00:36:47.644 common/idpf: not in enabled drivers build config 00:36:47.644 common/ionic: not in enabled drivers build config 00:36:47.644 common/mvep: not in enabled drivers build config 00:36:47.645 common/octeontx: not in enabled drivers build config 00:36:47.645 bus/auxiliary: not in enabled drivers build config 00:36:47.645 bus/cdx: not in enabled drivers build config 00:36:47.645 bus/dpaa: not in enabled drivers build config 00:36:47.645 bus/fslmc: not in enabled drivers build config 00:36:47.645 bus/ifpga: not in enabled drivers build config 00:36:47.645 bus/platform: not in enabled drivers build config 00:36:47.645 bus/uacce: not in enabled drivers build config 00:36:47.645 bus/vmbus: not in enabled drivers build config 00:36:47.645 common/cnxk: not in enabled drivers build config 00:36:47.645 common/mlx5: not in enabled drivers build config 00:36:47.645 common/nfp: not in enabled drivers build config 00:36:47.645 common/nitrox: not in enabled drivers build config 
00:36:47.645 common/qat: not in enabled drivers build config 00:36:47.645 common/sfc_efx: not in enabled drivers build config 00:36:47.645 mempool/bucket: not in enabled drivers build config 00:36:47.645 mempool/cnxk: not in enabled drivers build config 00:36:47.645 mempool/dpaa: not in enabled drivers build config 00:36:47.645 mempool/dpaa2: not in enabled drivers build config 00:36:47.645 mempool/octeontx: not in enabled drivers build config 00:36:47.645 mempool/stack: not in enabled drivers build config 00:36:47.645 dma/cnxk: not in enabled drivers build config 00:36:47.645 dma/dpaa: not in enabled drivers build config 00:36:47.645 dma/dpaa2: not in enabled drivers build config 00:36:47.645 dma/hisilicon: not in enabled drivers build config 00:36:47.645 dma/idxd: not in enabled drivers build config 00:36:47.645 dma/ioat: not in enabled drivers build config 00:36:47.645 dma/skeleton: not in enabled drivers build config 00:36:47.645 net/af_packet: not in enabled drivers build config 00:36:47.645 net/af_xdp: not in enabled drivers build config 00:36:47.645 net/ark: not in enabled drivers build config 00:36:47.645 net/atlantic: not in enabled drivers build config 00:36:47.645 net/avp: not in enabled drivers build config 00:36:47.645 net/axgbe: not in enabled drivers build config 00:36:47.645 net/bnx2x: not in enabled drivers build config 00:36:47.645 net/bnxt: not in enabled drivers build config 00:36:47.645 net/bonding: not in enabled drivers build config 00:36:47.645 net/cnxk: not in enabled drivers build config 00:36:47.645 net/cpfl: not in enabled drivers build config 00:36:47.645 net/cxgbe: not in enabled drivers build config 00:36:47.645 net/dpaa: not in enabled drivers build config 00:36:47.645 net/dpaa2: not in enabled drivers build config 00:36:47.645 net/e1000: not in enabled drivers build config 00:36:47.645 net/ena: not in enabled drivers build config 00:36:47.645 net/enetc: not in enabled drivers build config 00:36:47.645 net/enetfec: not in enabled drivers build config 00:36:47.645 net/enic: not in enabled drivers build config 00:36:47.645 net/failsafe: not in enabled drivers build config 00:36:47.645 net/fm10k: not in enabled drivers build config 00:36:47.645 net/gve: not in enabled drivers build config 00:36:47.645 net/hinic: not in enabled drivers build config 00:36:47.645 net/hns3: not in enabled drivers build config 00:36:47.645 net/i40e: not in enabled drivers build config 00:36:47.645 net/iavf: not in enabled drivers build config 00:36:47.645 net/ice: not in enabled drivers build config 00:36:47.645 net/idpf: not in enabled drivers build config 00:36:47.645 net/igc: not in enabled drivers build config 00:36:47.645 net/ionic: not in enabled drivers build config 00:36:47.645 net/ipn3ke: not in enabled drivers build config 00:36:47.645 net/ixgbe: not in enabled drivers build config 00:36:47.645 net/mana: not in enabled drivers build config 00:36:47.645 net/memif: not in enabled drivers build config 00:36:47.645 net/mlx4: not in enabled drivers build config 00:36:47.645 net/mlx5: not in enabled drivers build config 00:36:47.645 net/mvneta: not in enabled drivers build config 00:36:47.645 net/mvpp2: not in enabled drivers build config 00:36:47.645 net/netvsc: not in enabled drivers build config 00:36:47.645 net/nfb: not in enabled drivers build config 00:36:47.645 net/nfp: not in enabled drivers build config 00:36:47.645 net/ngbe: not in enabled drivers build config 00:36:47.645 net/null: not in enabled drivers build config 00:36:47.645 net/octeontx: not in enabled drivers 
build config 00:36:47.645 net/octeon_ep: not in enabled drivers build config 00:36:47.645 net/pcap: not in enabled drivers build config 00:36:47.645 net/pfe: not in enabled drivers build config 00:36:47.645 net/qede: not in enabled drivers build config 00:36:47.645 net/ring: not in enabled drivers build config 00:36:47.645 net/sfc: not in enabled drivers build config 00:36:47.645 net/softnic: not in enabled drivers build config 00:36:47.645 net/tap: not in enabled drivers build config 00:36:47.645 net/thunderx: not in enabled drivers build config 00:36:47.645 net/txgbe: not in enabled drivers build config 00:36:47.645 net/vdev_netvsc: not in enabled drivers build config 00:36:47.645 net/vhost: not in enabled drivers build config 00:36:47.645 net/virtio: not in enabled drivers build config 00:36:47.645 net/vmxnet3: not in enabled drivers build config 00:36:47.645 raw/*: missing internal dependency, "rawdev" 00:36:47.645 crypto/armv8: not in enabled drivers build config 00:36:47.645 crypto/bcmfs: not in enabled drivers build config 00:36:47.645 crypto/caam_jr: not in enabled drivers build config 00:36:47.645 crypto/ccp: not in enabled drivers build config 00:36:47.645 crypto/cnxk: not in enabled drivers build config 00:36:47.645 crypto/dpaa_sec: not in enabled drivers build config 00:36:47.645 crypto/dpaa2_sec: not in enabled drivers build config 00:36:47.645 crypto/ipsec_mb: not in enabled drivers build config 00:36:47.645 crypto/mlx5: not in enabled drivers build config 00:36:47.645 crypto/mvsam: not in enabled drivers build config 00:36:47.645 crypto/nitrox: not in enabled drivers build config 00:36:47.645 crypto/null: not in enabled drivers build config 00:36:47.645 crypto/octeontx: not in enabled drivers build config 00:36:47.645 crypto/openssl: not in enabled drivers build config 00:36:47.645 crypto/scheduler: not in enabled drivers build config 00:36:47.645 crypto/uadk: not in enabled drivers build config 00:36:47.645 crypto/virtio: not in enabled drivers build config 00:36:47.645 compress/isal: not in enabled drivers build config 00:36:47.645 compress/mlx5: not in enabled drivers build config 00:36:47.645 compress/nitrox: not in enabled drivers build config 00:36:47.645 compress/octeontx: not in enabled drivers build config 00:36:47.645 compress/zlib: not in enabled drivers build config 00:36:47.645 regex/*: missing internal dependency, "regexdev" 00:36:47.645 ml/*: missing internal dependency, "mldev" 00:36:47.645 vdpa/ifc: not in enabled drivers build config 00:36:47.645 vdpa/mlx5: not in enabled drivers build config 00:36:47.645 vdpa/nfp: not in enabled drivers build config 00:36:47.645 vdpa/sfc: not in enabled drivers build config 00:36:47.645 event/*: missing internal dependency, "eventdev" 00:36:47.645 baseband/*: missing internal dependency, "bbdev" 00:36:47.645 gpu/*: missing internal dependency, "gpudev" 00:36:47.645 00:36:47.645 00:36:47.645 Build targets in project: 85 00:36:47.645 00:36:47.645 DPDK 24.03.0 00:36:47.645 00:36:47.645 User defined options 00:36:47.645 buildtype : debug 00:36:47.645 default_library : shared 00:36:47.645 libdir : lib 00:36:47.645 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:36:47.645 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:36:47.645 c_link_args : 00:36:47.645 cpu_instruction_set: native 00:36:47.645 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:36:47.645 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:36:47.645 enable_docs : false 00:36:47.645 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:36:47.645 enable_kmods : false 00:36:47.645 max_lcores : 128 00:36:47.645 tests : false 00:36:47.645 00:36:47.645 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:36:47.645 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:36:47.645 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:36:47.645 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:36:47.645 [3/268] Linking static target lib/librte_kvargs.a 00:36:47.645 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:36:47.645 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:36:47.645 [6/268] Linking static target lib/librte_log.a 00:36:47.904 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:36:47.904 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:36:48.163 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:36:48.163 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:36:48.163 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:36:48.163 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:36:48.163 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:36:48.163 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:36:48.421 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:36:48.421 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:36:48.421 [17/268] Linking static target lib/librte_telemetry.a 00:36:48.421 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:36:48.422 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:36:48.422 [20/268] Linking target lib/librte_log.so.24.1 00:36:48.681 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:36:48.940 [22/268] Linking target lib/librte_kvargs.so.24.1 00:36:49.199 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:36:49.199 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:36:49.199 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:36:49.199 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:36:49.199 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:36:49.199 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:36:49.199 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:36:49.199 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:36:49.199 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:36:49.199 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:36:49.199 [33/268] Linking target lib/librte_telemetry.so.24.1 00:36:49.458 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:36:49.458 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:36:49.458 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:36:49.718 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:36:49.977 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:36:49.977 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:36:49.977 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:36:50.236 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:36:50.236 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:36:50.236 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:36:50.236 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:36:50.236 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:36:50.236 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:36:50.236 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:36:50.496 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:36:50.496 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:36:50.496 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:36:50.756 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:36:51.016 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:36:51.016 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:36:51.275 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:36:51.275 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:36:51.275 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:36:51.275 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:36:51.275 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:36:51.275 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:36:51.275 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:36:51.275 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:36:51.534 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:36:51.794 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:36:52.054 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:36:52.054 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:36:52.054 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:36:52.054 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:36:52.314 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:36:52.315 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:36:52.315 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:36:52.574 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:36:52.574 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:36:52.574 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:36:52.574 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:36:52.833 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:36:52.833 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:36:52.833 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:36:53.093 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:36:53.093 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:36:53.093 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:36:53.093 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:36:53.093 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:36:53.093 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:36:53.093 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:36:53.093 [85/268] Linking static target lib/librte_eal.a 00:36:53.663 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:36:53.663 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:36:53.663 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:36:53.663 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:36:53.663 [90/268] Linking static target lib/librte_rcu.a 00:36:53.663 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:36:53.663 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:36:53.923 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:36:53.923 [94/268] Linking static target lib/librte_ring.a 00:36:53.923 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:36:53.923 [96/268] Linking static target lib/librte_mempool.a 00:36:53.923 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:36:54.182 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:36:54.182 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:36:54.182 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:36:54.182 [101/268] Linking static target lib/librte_mbuf.a 00:36:54.182 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:36:54.442 [103/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:36:54.442 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:36:54.442 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:36:54.702 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:36:54.702 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:36:54.965 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:36:54.965 [109/268] Linking static target lib/librte_meter.a 00:36:54.965 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:36:54.965 [111/268] Linking static target lib/librte_net.a 00:36:55.241 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:36:55.241 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:36:55.241 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:36:55.241 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:36:55.241 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:36:55.521 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:36:55.521 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:36:55.521 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:36:55.800 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:36:56.089 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:36:56.383 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:36:56.383 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:36:56.667 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:36:56.667 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:36:56.667 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:36:56.667 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:36:56.667 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:36:56.667 [129/268] Linking static target lib/librte_pci.a 00:36:56.667 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:36:56.667 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:36:56.667 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:36:56.944 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:36:56.944 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:36:56.944 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:36:56.944 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:36:57.219 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:36:57.219 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:36:57.219 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:36:57.219 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:36:57.219 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:36:57.219 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:36:57.219 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:36:57.219 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:36:57.219 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:36:57.219 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:36:57.219 [147/268] Linking static target lib/librte_ethdev.a 00:36:57.478 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:36:57.478 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:36:57.478 [150/268] Linking static target lib/librte_cmdline.a 00:36:57.740 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:36:57.740 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:36:57.740 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:36:57.997 [154/268] Linking static target lib/librte_timer.a 00:36:57.997 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:36:57.997 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:36:57.997 [157/268] Linking static target lib/librte_hash.a 00:36:58.255 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:36:58.255 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:36:58.513 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:36:58.513 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:36:58.513 [162/268] Linking static target lib/librte_compressdev.a 00:36:58.513 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:36:58.513 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:36:58.771 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:36:59.029 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:36:59.029 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:36:59.029 [168/268] Linking static target lib/librte_dmadev.a 00:36:59.029 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:36:59.287 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:36:59.287 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:36:59.287 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:36:59.287 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:36:59.287 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:36:59.545 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:36:59.545 [176/268] Linking static target lib/librte_cryptodev.a 00:36:59.545 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:36:59.545 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:36:59.803 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:37:00.061 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:37:00.061 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:37:00.061 [182/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:37:00.061 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:37:00.061 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:37:00.320 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:37:00.320 [186/268] Linking static target lib/librte_power.a 00:37:00.579 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:37:00.579 [188/268] Linking static target lib/librte_reorder.a 00:37:00.579 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:37:00.837 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:37:00.837 [191/268] Linking static 
target lib/librte_security.a 00:37:00.837 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:37:00.837 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:37:01.096 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:37:01.354 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:37:01.612 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:37:01.612 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:37:01.612 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:37:01.869 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:37:01.869 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:37:01.869 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:37:02.127 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:37:02.385 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:37:02.385 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:37:02.385 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:37:02.385 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:37:02.669 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:37:02.669 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:37:02.669 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:37:02.669 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:37:02.669 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:37:02.937 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:37:02.937 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:37:02.937 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:37:02.937 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:37:02.937 [216/268] Linking static target drivers/librte_bus_vdev.a 00:37:02.937 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:37:02.937 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:37:02.937 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:37:02.937 [220/268] Linking static target drivers/librte_bus_pci.a 00:37:03.195 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:37:03.195 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:37:03.195 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:37:03.452 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:37:03.452 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:37:03.452 [226/268] Linking static target drivers/librte_mempool_ring.a 00:37:03.452 [227/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:37:03.710 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:37:04.275 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:37:04.275 [230/268] Linking static target lib/librte_vhost.a 00:37:04.841 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:37:04.841 [232/268] Linking target lib/librte_eal.so.24.1 00:37:05.099 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:37:05.099 [234/268] Linking target lib/librte_pci.so.24.1 00:37:05.099 [235/268] Linking target lib/librte_ring.so.24.1 00:37:05.099 [236/268] Linking target lib/librte_dmadev.so.24.1 00:37:05.099 [237/268] Linking target lib/librte_meter.so.24.1 00:37:05.099 [238/268] Linking target lib/librte_timer.so.24.1 00:37:05.099 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:37:05.099 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:37:05.099 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:37:05.099 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:37:05.358 [243/268] Linking target lib/librte_mempool.so.24.1 00:37:05.358 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:37:05.358 [245/268] Linking target lib/librte_rcu.so.24.1 00:37:05.358 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:37:05.358 [247/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:37:05.358 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:37:05.358 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:37:05.358 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:37:05.358 [251/268] Linking target lib/librte_mbuf.so.24.1 00:37:05.358 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:37:05.616 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:37:05.616 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:37:05.616 [255/268] Linking target lib/librte_net.so.24.1 00:37:05.616 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:37:05.616 [257/268] Linking target lib/librte_compressdev.so.24.1 00:37:05.616 [258/268] Linking target lib/librte_reorder.so.24.1 00:37:05.874 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:37:05.874 [260/268] Linking target lib/librte_cmdline.so.24.1 00:37:05.874 [261/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:37:05.874 [262/268] Linking target lib/librte_hash.so.24.1 00:37:05.874 [263/268] Linking target lib/librte_ethdev.so.24.1 00:37:05.874 [264/268] Linking target lib/librte_security.so.24.1 00:37:05.874 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:37:05.874 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:37:06.133 [267/268] Linking target lib/librte_power.so.24.1 00:37:06.133 [268/268] Linking target lib/librte_vhost.so.24.1 00:37:06.133 INFO: autodetecting backend as ninja 00:37:06.133 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:37:32.673 CC lib/log/log_flags.o 00:37:32.673 CC lib/log/log.o 00:37:32.673 CC lib/ut_mock/mock.o 00:37:32.673 CC 
lib/log/log_deprecated.o 00:37:32.673 CC lib/ut/ut.o 00:37:32.673 LIB libspdk_ut.a 00:37:32.673 LIB libspdk_ut_mock.a 00:37:32.673 LIB libspdk_log.a 00:37:32.673 SO libspdk_ut.so.2.0 00:37:32.673 SO libspdk_ut_mock.so.6.0 00:37:32.673 SO libspdk_log.so.7.1 00:37:32.673 SYMLINK libspdk_ut_mock.so 00:37:32.673 SYMLINK libspdk_ut.so 00:37:32.673 SYMLINK libspdk_log.so 00:37:32.673 CXX lib/trace_parser/trace.o 00:37:32.673 CC lib/util/base64.o 00:37:32.673 CC lib/util/bit_array.o 00:37:32.673 CC lib/ioat/ioat.o 00:37:32.673 CC lib/dma/dma.o 00:37:32.673 CC lib/util/cpuset.o 00:37:32.673 CC lib/util/crc16.o 00:37:32.673 CC lib/util/crc32c.o 00:37:32.673 CC lib/util/crc32.o 00:37:32.930 CC lib/vfio_user/host/vfio_user_pci.o 00:37:32.930 CC lib/vfio_user/host/vfio_user.o 00:37:32.930 CC lib/util/crc32_ieee.o 00:37:32.930 CC lib/util/crc64.o 00:37:32.930 CC lib/util/dif.o 00:37:32.930 CC lib/util/fd.o 00:37:33.188 LIB libspdk_dma.a 00:37:33.188 CC lib/util/fd_group.o 00:37:33.188 SO libspdk_dma.so.5.0 00:37:33.188 CC lib/util/file.o 00:37:33.188 CC lib/util/hexlify.o 00:37:33.188 SYMLINK libspdk_dma.so 00:37:33.188 CC lib/util/iov.o 00:37:33.188 LIB libspdk_ioat.a 00:37:33.188 CC lib/util/math.o 00:37:33.188 SO libspdk_ioat.so.7.0 00:37:33.188 CC lib/util/net.o 00:37:33.188 LIB libspdk_vfio_user.a 00:37:33.188 SO libspdk_vfio_user.so.5.0 00:37:33.188 SYMLINK libspdk_ioat.so 00:37:33.188 CC lib/util/pipe.o 00:37:33.188 CC lib/util/strerror_tls.o 00:37:33.188 CC lib/util/string.o 00:37:33.445 SYMLINK libspdk_vfio_user.so 00:37:33.445 CC lib/util/uuid.o 00:37:33.445 CC lib/util/xor.o 00:37:33.445 CC lib/util/zipf.o 00:37:33.445 CC lib/util/md5.o 00:37:33.708 LIB libspdk_util.a 00:37:33.966 LIB libspdk_trace_parser.a 00:37:33.966 SO libspdk_util.so.10.1 00:37:33.966 SO libspdk_trace_parser.so.6.0 00:37:33.966 SYMLINK libspdk_trace_parser.so 00:37:33.966 SYMLINK libspdk_util.so 00:37:34.223 CC lib/vmd/vmd.o 00:37:34.223 CC lib/idxd/idxd.o 00:37:34.223 CC lib/idxd/idxd_user.o 00:37:34.223 CC lib/idxd/idxd_kernel.o 00:37:34.224 CC lib/vmd/led.o 00:37:34.224 CC lib/rdma_utils/rdma_utils.o 00:37:34.224 CC lib/env_dpdk/env.o 00:37:34.224 CC lib/env_dpdk/memory.o 00:37:34.224 CC lib/json/json_parse.o 00:37:34.224 CC lib/conf/conf.o 00:37:34.481 CC lib/json/json_util.o 00:37:34.481 CC lib/json/json_write.o 00:37:34.481 CC lib/env_dpdk/pci.o 00:37:34.481 CC lib/env_dpdk/init.o 00:37:34.481 LIB libspdk_conf.a 00:37:34.481 SO libspdk_conf.so.6.0 00:37:34.481 LIB libspdk_rdma_utils.a 00:37:34.481 SYMLINK libspdk_conf.so 00:37:34.481 CC lib/env_dpdk/threads.o 00:37:34.481 SO libspdk_rdma_utils.so.1.0 00:37:34.739 SYMLINK libspdk_rdma_utils.so 00:37:34.739 CC lib/env_dpdk/pci_ioat.o 00:37:34.739 CC lib/env_dpdk/pci_virtio.o 00:37:34.739 LIB libspdk_json.a 00:37:34.739 CC lib/rdma_provider/common.o 00:37:34.739 LIB libspdk_idxd.a 00:37:34.739 CC lib/rdma_provider/rdma_provider_verbs.o 00:37:34.740 SO libspdk_json.so.6.0 00:37:34.740 CC lib/env_dpdk/pci_vmd.o 00:37:34.740 CC lib/env_dpdk/pci_idxd.o 00:37:34.740 SO libspdk_idxd.so.12.1 00:37:34.740 CC lib/env_dpdk/pci_event.o 00:37:34.998 SYMLINK libspdk_json.so 00:37:34.998 LIB libspdk_vmd.a 00:37:34.998 CC lib/env_dpdk/sigbus_handler.o 00:37:34.998 SYMLINK libspdk_idxd.so 00:37:34.998 CC lib/env_dpdk/pci_dpdk.o 00:37:34.998 SO libspdk_vmd.so.6.0 00:37:34.998 CC lib/env_dpdk/pci_dpdk_2207.o 00:37:34.998 SYMLINK libspdk_vmd.so 00:37:34.998 CC lib/env_dpdk/pci_dpdk_2211.o 00:37:34.998 LIB libspdk_rdma_provider.a 00:37:34.998 SO libspdk_rdma_provider.so.7.0 00:37:34.998 
SYMLINK libspdk_rdma_provider.so 00:37:35.255 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:37:35.255 CC lib/jsonrpc/jsonrpc_server.o 00:37:35.255 CC lib/jsonrpc/jsonrpc_client.o 00:37:35.255 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:37:35.513 LIB libspdk_jsonrpc.a 00:37:35.513 SO libspdk_jsonrpc.so.6.0 00:37:35.513 SYMLINK libspdk_jsonrpc.so 00:37:35.770 LIB libspdk_env_dpdk.a 00:37:35.770 CC lib/rpc/rpc.o 00:37:36.028 SO libspdk_env_dpdk.so.15.1 00:37:36.028 LIB libspdk_rpc.a 00:37:36.028 SYMLINK libspdk_env_dpdk.so 00:37:36.028 SO libspdk_rpc.so.6.0 00:37:36.287 SYMLINK libspdk_rpc.so 00:37:36.545 CC lib/keyring/keyring.o 00:37:36.545 CC lib/keyring/keyring_rpc.o 00:37:36.545 CC lib/notify/notify.o 00:37:36.545 CC lib/notify/notify_rpc.o 00:37:36.545 CC lib/trace/trace.o 00:37:36.545 CC lib/trace/trace_rpc.o 00:37:36.545 CC lib/trace/trace_flags.o 00:37:36.545 LIB libspdk_notify.a 00:37:36.545 SO libspdk_notify.so.6.0 00:37:36.804 LIB libspdk_keyring.a 00:37:36.804 LIB libspdk_trace.a 00:37:36.804 SYMLINK libspdk_notify.so 00:37:36.804 SO libspdk_keyring.so.2.0 00:37:36.804 SO libspdk_trace.so.11.0 00:37:36.804 SYMLINK libspdk_keyring.so 00:37:36.804 SYMLINK libspdk_trace.so 00:37:37.063 CC lib/thread/thread.o 00:37:37.063 CC lib/thread/iobuf.o 00:37:37.063 CC lib/sock/sock.o 00:37:37.063 CC lib/sock/sock_rpc.o 00:37:37.630 LIB libspdk_sock.a 00:37:37.630 SO libspdk_sock.so.10.0 00:37:37.630 SYMLINK libspdk_sock.so 00:37:37.888 CC lib/nvme/nvme_ctrlr_cmd.o 00:37:37.888 CC lib/nvme/nvme_ctrlr.o 00:37:37.888 CC lib/nvme/nvme_fabric.o 00:37:37.888 CC lib/nvme/nvme_ns.o 00:37:37.888 CC lib/nvme/nvme_ns_cmd.o 00:37:37.888 CC lib/nvme/nvme_pcie_common.o 00:37:37.888 CC lib/nvme/nvme_pcie.o 00:37:37.888 CC lib/nvme/nvme_qpair.o 00:37:37.888 CC lib/nvme/nvme.o 00:37:38.824 LIB libspdk_thread.a 00:37:38.824 CC lib/nvme/nvme_quirks.o 00:37:38.824 SO libspdk_thread.so.11.0 00:37:38.824 CC lib/nvme/nvme_transport.o 00:37:38.824 SYMLINK libspdk_thread.so 00:37:38.824 CC lib/nvme/nvme_discovery.o 00:37:38.824 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:37:38.824 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:37:38.824 CC lib/nvme/nvme_tcp.o 00:37:38.824 CC lib/nvme/nvme_opal.o 00:37:39.391 CC lib/accel/accel.o 00:37:39.391 CC lib/blob/blobstore.o 00:37:39.391 CC lib/blob/request.o 00:37:39.391 CC lib/blob/zeroes.o 00:37:39.391 CC lib/blob/blob_bs_dev.o 00:37:39.391 CC lib/accel/accel_rpc.o 00:37:39.391 CC lib/accel/accel_sw.o 00:37:39.650 CC lib/nvme/nvme_io_msg.o 00:37:39.650 CC lib/nvme/nvme_poll_group.o 00:37:39.650 CC lib/nvme/nvme_zns.o 00:37:39.650 CC lib/nvme/nvme_stubs.o 00:37:39.909 CC lib/init/json_config.o 00:37:39.909 CC lib/virtio/virtio.o 00:37:40.168 CC lib/init/subsystem.o 00:37:40.168 CC lib/virtio/virtio_vhost_user.o 00:37:40.426 CC lib/virtio/virtio_vfio_user.o 00:37:40.426 CC lib/virtio/virtio_pci.o 00:37:40.426 CC lib/init/subsystem_rpc.o 00:37:40.426 CC lib/init/rpc.o 00:37:40.426 CC lib/nvme/nvme_auth.o 00:37:40.426 LIB libspdk_accel.a 00:37:40.426 SO libspdk_accel.so.16.0 00:37:40.426 CC lib/nvme/nvme_cuse.o 00:37:40.426 CC lib/fsdev/fsdev.o 00:37:40.426 CC lib/nvme/nvme_rdma.o 00:37:40.426 SYMLINK libspdk_accel.so 00:37:40.426 CC lib/fsdev/fsdev_io.o 00:37:40.426 LIB libspdk_init.a 00:37:40.426 CC lib/fsdev/fsdev_rpc.o 00:37:40.685 SO libspdk_init.so.6.0 00:37:40.685 SYMLINK libspdk_init.so 00:37:40.685 LIB libspdk_virtio.a 00:37:40.685 SO libspdk_virtio.so.7.0 00:37:40.685 SYMLINK libspdk_virtio.so 00:37:40.685 CC lib/bdev/bdev.o 00:37:40.685 CC lib/bdev/bdev_zone.o 00:37:40.685 CC lib/bdev/bdev_rpc.o 
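The entries above cover the core SPDK libraries (log, util, env_dpdk, json/jsonrpc, thread, nvme, blob, plus the bdev and event code that follows) being compiled on top of the DPDK build configured earlier. As a rough, hedged illustration only (not taken from this log), an application built on these libraries typically boots the event framework as in the sketch below; the app name is made up, and the two-argument spdk_app_opts_init() form assumes a recent SPDK API.

#include "spdk/event.h"
#include "spdk/log.h"

/* First function run on the main SPDK reactor once the framework is up. */
static void
app_start_cb(void *arg)
{
	SPDK_NOTICELOG("SPDK app framework started\n");
	spdk_app_stop(0);   /* shut the reactors back down immediately */
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	(void)argc;
	(void)argv;

	/* Recent SPDK versions take the struct size so the options stay ABI-stable. */
	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "hello_event";          /* hypothetical app name */

	rc = spdk_app_start(&opts, app_start_cb, NULL);
	spdk_app_fini();
	return rc;
}

Such a program would link against the libspdk_* archives or shared objects that this stage of the build produces.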
00:37:40.685 CC lib/event/app.o 00:37:40.943 CC lib/bdev/part.o 00:37:40.943 CC lib/bdev/scsi_nvme.o 00:37:41.201 CC lib/event/reactor.o 00:37:41.201 LIB libspdk_fsdev.a 00:37:41.201 SO libspdk_fsdev.so.2.0 00:37:41.201 CC lib/event/log_rpc.o 00:37:41.201 CC lib/event/app_rpc.o 00:37:41.201 CC lib/event/scheduler_static.o 00:37:41.201 SYMLINK libspdk_fsdev.so 00:37:41.459 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:37:41.459 LIB libspdk_event.a 00:37:41.718 SO libspdk_event.so.14.0 00:37:41.718 SYMLINK libspdk_event.so 00:37:41.976 LIB libspdk_nvme.a 00:37:41.976 LIB libspdk_fuse_dispatcher.a 00:37:41.976 SO libspdk_fuse_dispatcher.so.1.0 00:37:42.234 SYMLINK libspdk_fuse_dispatcher.so 00:37:42.235 SO libspdk_nvme.so.15.0 00:37:42.493 LIB libspdk_blob.a 00:37:42.493 SO libspdk_blob.so.12.0 00:37:42.493 SYMLINK libspdk_nvme.so 00:37:42.493 SYMLINK libspdk_blob.so 00:37:42.751 CC lib/blobfs/blobfs.o 00:37:42.751 CC lib/blobfs/tree.o 00:37:42.751 CC lib/lvol/lvol.o 00:37:43.685 LIB libspdk_bdev.a 00:37:43.685 LIB libspdk_blobfs.a 00:37:43.685 SO libspdk_blobfs.so.11.0 00:37:43.685 SO libspdk_bdev.so.17.0 00:37:43.685 LIB libspdk_lvol.a 00:37:43.685 SYMLINK libspdk_blobfs.so 00:37:43.685 SO libspdk_lvol.so.11.0 00:37:43.685 SYMLINK libspdk_bdev.so 00:37:43.943 SYMLINK libspdk_lvol.so 00:37:43.943 CC lib/nvmf/ctrlr.o 00:37:43.943 CC lib/nvmf/ctrlr_discovery.o 00:37:43.943 CC lib/nvmf/ctrlr_bdev.o 00:37:43.943 CC lib/nvmf/subsystem.o 00:37:43.943 CC lib/nvmf/nvmf.o 00:37:43.943 CC lib/nvmf/nvmf_rpc.o 00:37:43.943 CC lib/scsi/dev.o 00:37:43.943 CC lib/nbd/nbd.o 00:37:43.943 CC lib/ublk/ublk.o 00:37:43.943 CC lib/ftl/ftl_core.o 00:37:44.201 CC lib/scsi/lun.o 00:37:44.458 CC lib/nbd/nbd_rpc.o 00:37:44.458 CC lib/scsi/port.o 00:37:44.458 CC lib/ftl/ftl_init.o 00:37:44.458 CC lib/ftl/ftl_layout.o 00:37:44.458 LIB libspdk_nbd.a 00:37:44.458 SO libspdk_nbd.so.7.0 00:37:44.716 CC lib/ublk/ublk_rpc.o 00:37:44.716 CC lib/scsi/scsi.o 00:37:44.716 SYMLINK libspdk_nbd.so 00:37:44.716 CC lib/nvmf/transport.o 00:37:44.716 CC lib/nvmf/tcp.o 00:37:44.716 CC lib/nvmf/stubs.o 00:37:44.716 CC lib/scsi/scsi_bdev.o 00:37:44.716 LIB libspdk_ublk.a 00:37:44.975 CC lib/ftl/ftl_debug.o 00:37:44.975 SO libspdk_ublk.so.3.0 00:37:44.975 CC lib/nvmf/mdns_server.o 00:37:44.975 SYMLINK libspdk_ublk.so 00:37:44.975 CC lib/nvmf/rdma.o 00:37:44.975 CC lib/nvmf/auth.o 00:37:45.233 CC lib/ftl/ftl_io.o 00:37:45.233 CC lib/scsi/scsi_pr.o 00:37:45.233 CC lib/ftl/ftl_sb.o 00:37:45.233 CC lib/ftl/ftl_l2p.o 00:37:45.491 CC lib/ftl/ftl_l2p_flat.o 00:37:45.491 CC lib/scsi/scsi_rpc.o 00:37:45.491 CC lib/scsi/task.o 00:37:45.491 CC lib/ftl/ftl_nv_cache.o 00:37:45.491 CC lib/ftl/ftl_band.o 00:37:45.491 CC lib/ftl/ftl_band_ops.o 00:37:45.491 CC lib/ftl/ftl_writer.o 00:37:45.750 CC lib/ftl/ftl_rq.o 00:37:45.750 LIB libspdk_scsi.a 00:37:45.750 SO libspdk_scsi.so.9.0 00:37:45.750 CC lib/ftl/ftl_reloc.o 00:37:45.750 CC lib/ftl/ftl_l2p_cache.o 00:37:45.750 SYMLINK libspdk_scsi.so 00:37:45.750 CC lib/ftl/ftl_p2l.o 00:37:46.009 CC lib/ftl/ftl_p2l_log.o 00:37:46.009 CC lib/ftl/mngt/ftl_mngt.o 00:37:46.009 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:37:46.009 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:37:46.266 CC lib/ftl/mngt/ftl_mngt_startup.o 00:37:46.266 CC lib/ftl/mngt/ftl_mngt_md.o 00:37:46.266 CC lib/ftl/mngt/ftl_mngt_misc.o 00:37:46.266 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:37:46.266 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:37:46.266 CC lib/ftl/mngt/ftl_mngt_band.o 00:37:46.266 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:37:46.266 CC lib/iscsi/conn.o 00:37:46.524 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:37:46.524 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:37:46.524 CC lib/iscsi/init_grp.o 00:37:46.524 CC lib/iscsi/iscsi.o 00:37:46.524 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:37:46.524 CC lib/iscsi/param.o 00:37:46.524 CC lib/vhost/vhost.o 00:37:46.524 CC lib/vhost/vhost_rpc.o 00:37:46.524 CC lib/ftl/utils/ftl_conf.o 00:37:46.783 CC lib/ftl/utils/ftl_md.o 00:37:46.783 CC lib/vhost/vhost_scsi.o 00:37:46.783 CC lib/vhost/vhost_blk.o 00:37:46.783 CC lib/iscsi/portal_grp.o 00:37:47.041 CC lib/iscsi/tgt_node.o 00:37:47.041 CC lib/vhost/rte_vhost_user.o 00:37:47.041 LIB libspdk_nvmf.a 00:37:47.300 CC lib/iscsi/iscsi_subsystem.o 00:37:47.300 SO libspdk_nvmf.so.20.0 00:37:47.300 CC lib/ftl/utils/ftl_mempool.o 00:37:47.300 CC lib/ftl/utils/ftl_bitmap.o 00:37:47.300 SYMLINK libspdk_nvmf.so 00:37:47.300 CC lib/ftl/utils/ftl_property.o 00:37:47.300 CC lib/iscsi/iscsi_rpc.o 00:37:47.300 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:37:47.558 CC lib/iscsi/task.o 00:37:47.558 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:37:47.558 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:37:47.558 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:37:47.558 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:37:47.816 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:37:47.816 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:37:47.816 CC lib/ftl/upgrade/ftl_sb_v3.o 00:37:47.816 CC lib/ftl/upgrade/ftl_sb_v5.o 00:37:47.816 CC lib/ftl/nvc/ftl_nvc_dev.o 00:37:47.816 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:37:47.816 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:37:47.816 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:37:47.816 LIB libspdk_iscsi.a 00:37:47.816 CC lib/ftl/base/ftl_base_dev.o 00:37:48.074 CC lib/ftl/base/ftl_base_bdev.o 00:37:48.074 SO libspdk_iscsi.so.8.0 00:37:48.074 CC lib/ftl/ftl_trace.o 00:37:48.074 SYMLINK libspdk_iscsi.so 00:37:48.074 LIB libspdk_vhost.a 00:37:48.333 SO libspdk_vhost.so.8.0 00:37:48.333 LIB libspdk_ftl.a 00:37:48.333 SYMLINK libspdk_vhost.so 00:37:48.591 SO libspdk_ftl.so.9.0 00:37:48.850 SYMLINK libspdk_ftl.so 00:37:49.109 CC module/env_dpdk/env_dpdk_rpc.o 00:37:49.368 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:37:49.368 CC module/scheduler/gscheduler/gscheduler.o 00:37:49.368 CC module/sock/posix/posix.o 00:37:49.368 CC module/scheduler/dynamic/scheduler_dynamic.o 00:37:49.368 CC module/accel/ioat/accel_ioat.o 00:37:49.368 CC module/blob/bdev/blob_bdev.o 00:37:49.368 CC module/keyring/file/keyring.o 00:37:49.368 CC module/fsdev/aio/fsdev_aio.o 00:37:49.368 CC module/accel/error/accel_error.o 00:37:49.368 LIB libspdk_env_dpdk_rpc.a 00:37:49.368 SO libspdk_env_dpdk_rpc.so.6.0 00:37:49.368 SYMLINK libspdk_env_dpdk_rpc.so 00:37:49.368 CC module/accel/error/accel_error_rpc.o 00:37:49.368 LIB libspdk_scheduler_gscheduler.a 00:37:49.368 CC module/keyring/file/keyring_rpc.o 00:37:49.368 LIB libspdk_scheduler_dpdk_governor.a 00:37:49.368 SO libspdk_scheduler_gscheduler.so.4.0 00:37:49.368 SO libspdk_scheduler_dpdk_governor.so.4.0 00:37:49.368 CC module/accel/ioat/accel_ioat_rpc.o 00:37:49.368 LIB libspdk_scheduler_dynamic.a 00:37:49.627 SO libspdk_scheduler_dynamic.so.4.0 00:37:49.627 SYMLINK libspdk_scheduler_gscheduler.so 00:37:49.627 SYMLINK libspdk_scheduler_dpdk_governor.so 00:37:49.627 CC module/fsdev/aio/fsdev_aio_rpc.o 00:37:49.627 LIB libspdk_blob_bdev.a 00:37:49.627 LIB libspdk_accel_error.a 00:37:49.627 LIB libspdk_keyring_file.a 00:37:49.627 SYMLINK libspdk_scheduler_dynamic.so 00:37:49.627 SO libspdk_blob_bdev.so.12.0 00:37:49.627 SO libspdk_accel_error.so.2.0 00:37:49.627 SO libspdk_keyring_file.so.2.0 
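From here the build switches to the pluggable modules: the posix and uring socket implementations, keyring, accel, scheduler, and the individual bdev modules (malloc, nvme, raid, uring, and so on) that follow just below. As a hedged sketch of how those pieces are consumed at runtime, and again not something taken from this job, the example below opens a block device by name from inside the app framework; the bdev name "Malloc0" and the app name are assumptions, and such a bdev would normally be created beforehand through a JSON config file or the RPC interface.

#include "spdk/bdev.h"
#include "spdk/event.h"
#include "spdk/log.h"

/* Called on bdev hot-remove or resize events; nothing to do in this sketch. */
static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
	SPDK_NOTICELOG("bdev event %d\n", type);
}

static void
open_bdev_start(void *arg)
{
	const char *name = arg;             /* hypothetical bdev name, e.g. "Malloc0" */
	struct spdk_bdev_desc *desc = NULL;
	struct spdk_io_channel *ch;
	int rc;

	rc = spdk_bdev_open_ext(name, false /* read-only */, bdev_event_cb, NULL, &desc);
	if (rc != 0) {
		SPDK_ERRLOG("could not open bdev %s: %d\n", name, rc);
		spdk_app_stop(rc);
		return;
	}

	/* Per-thread channel that spdk_bdev_read()/write() calls would use. */
	ch = spdk_bdev_get_io_channel(desc);
	if (ch != NULL) {
		SPDK_NOTICELOG("opened %s, block size %u\n", name,
			       spdk_bdev_get_block_size(spdk_bdev_desc_get_bdev(desc)));
		spdk_put_io_channel(ch);
	}

	spdk_bdev_close(desc);
	spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	(void)argc;
	(void)argv;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "bdev_open_sketch";     /* hypothetical app name */
	/* A JSON config (opts.json_config_file) would normally create "Malloc0" first. */

	rc = spdk_app_start(&opts, open_bdev_start, (void *)"Malloc0");
	spdk_app_fini();
	return rc;
}

The read-only open paired with spdk_bdev_get_io_channel() shown here is the usual prerequisite for issuing spdk_bdev_read()/spdk_bdev_write() calls from a reactor thread.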
00:37:49.627 LIB libspdk_accel_ioat.a 00:37:49.627 SO libspdk_accel_ioat.so.6.0 00:37:49.627 SYMLINK libspdk_blob_bdev.so 00:37:49.627 SYMLINK libspdk_accel_error.so 00:37:49.627 SYMLINK libspdk_keyring_file.so 00:37:49.627 CC module/sock/uring/uring.o 00:37:49.627 CC module/keyring/linux/keyring.o 00:37:49.627 CC module/fsdev/aio/linux_aio_mgr.o 00:37:49.627 SYMLINK libspdk_accel_ioat.so 00:37:49.627 CC module/keyring/linux/keyring_rpc.o 00:37:49.886 CC module/accel/dsa/accel_dsa.o 00:37:49.886 CC module/accel/iaa/accel_iaa.o 00:37:49.886 CC module/accel/iaa/accel_iaa_rpc.o 00:37:49.886 LIB libspdk_keyring_linux.a 00:37:49.886 SO libspdk_keyring_linux.so.1.0 00:37:49.886 CC module/accel/dsa/accel_dsa_rpc.o 00:37:49.886 CC module/bdev/delay/vbdev_delay.o 00:37:49.886 CC module/blobfs/bdev/blobfs_bdev.o 00:37:49.886 SYMLINK libspdk_keyring_linux.so 00:37:49.886 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:37:49.886 LIB libspdk_fsdev_aio.a 00:37:50.144 LIB libspdk_sock_posix.a 00:37:50.144 SO libspdk_fsdev_aio.so.1.0 00:37:50.144 SO libspdk_sock_posix.so.6.0 00:37:50.144 LIB libspdk_accel_iaa.a 00:37:50.144 LIB libspdk_accel_dsa.a 00:37:50.144 SO libspdk_accel_iaa.so.3.0 00:37:50.144 SO libspdk_accel_dsa.so.5.0 00:37:50.144 SYMLINK libspdk_fsdev_aio.so 00:37:50.144 SYMLINK libspdk_sock_posix.so 00:37:50.144 CC module/bdev/delay/vbdev_delay_rpc.o 00:37:50.144 LIB libspdk_blobfs_bdev.a 00:37:50.144 SYMLINK libspdk_accel_iaa.so 00:37:50.144 SYMLINK libspdk_accel_dsa.so 00:37:50.144 CC module/bdev/error/vbdev_error.o 00:37:50.144 SO libspdk_blobfs_bdev.so.6.0 00:37:50.144 CC module/bdev/gpt/gpt.o 00:37:50.402 SYMLINK libspdk_blobfs_bdev.so 00:37:50.402 CC module/bdev/gpt/vbdev_gpt.o 00:37:50.402 CC module/bdev/malloc/bdev_malloc.o 00:37:50.402 CC module/bdev/lvol/vbdev_lvol.o 00:37:50.402 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:37:50.402 CC module/bdev/null/bdev_null.o 00:37:50.402 CC module/bdev/nvme/bdev_nvme.o 00:37:50.402 LIB libspdk_bdev_delay.a 00:37:50.402 LIB libspdk_sock_uring.a 00:37:50.402 SO libspdk_bdev_delay.so.6.0 00:37:50.402 SO libspdk_sock_uring.so.5.0 00:37:50.402 CC module/bdev/nvme/bdev_nvme_rpc.o 00:37:50.402 CC module/bdev/error/vbdev_error_rpc.o 00:37:50.402 SYMLINK libspdk_sock_uring.so 00:37:50.402 SYMLINK libspdk_bdev_delay.so 00:37:50.402 CC module/bdev/nvme/nvme_rpc.o 00:37:50.402 CC module/bdev/nvme/bdev_mdns_client.o 00:37:50.661 LIB libspdk_bdev_gpt.a 00:37:50.661 SO libspdk_bdev_gpt.so.6.0 00:37:50.661 CC module/bdev/null/bdev_null_rpc.o 00:37:50.661 LIB libspdk_bdev_error.a 00:37:50.661 SYMLINK libspdk_bdev_gpt.so 00:37:50.661 CC module/bdev/nvme/vbdev_opal.o 00:37:50.661 SO libspdk_bdev_error.so.6.0 00:37:50.661 CC module/bdev/malloc/bdev_malloc_rpc.o 00:37:50.661 CC module/bdev/nvme/vbdev_opal_rpc.o 00:37:50.661 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:37:50.919 SYMLINK libspdk_bdev_error.so 00:37:50.919 LIB libspdk_bdev_null.a 00:37:50.919 SO libspdk_bdev_null.so.6.0 00:37:50.919 LIB libspdk_bdev_lvol.a 00:37:50.919 CC module/bdev/passthru/vbdev_passthru.o 00:37:50.919 SO libspdk_bdev_lvol.so.6.0 00:37:50.919 SYMLINK libspdk_bdev_null.so 00:37:50.919 LIB libspdk_bdev_malloc.a 00:37:50.919 SO libspdk_bdev_malloc.so.6.0 00:37:50.919 CC module/bdev/raid/bdev_raid.o 00:37:50.919 CC module/bdev/raid/bdev_raid_rpc.o 00:37:50.919 SYMLINK libspdk_bdev_lvol.so 00:37:51.178 SYMLINK libspdk_bdev_malloc.so 00:37:51.178 CC module/bdev/split/vbdev_split.o 00:37:51.178 CC module/bdev/zone_block/vbdev_zone_block.o 00:37:51.178 CC module/bdev/uring/bdev_uring.o 00:37:51.178 
CC module/bdev/aio/bdev_aio.o 00:37:51.178 CC module/bdev/ftl/bdev_ftl.o 00:37:51.178 CC module/bdev/ftl/bdev_ftl_rpc.o 00:37:51.178 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:37:51.178 CC module/bdev/iscsi/bdev_iscsi.o 00:37:51.437 LIB libspdk_bdev_passthru.a 00:37:51.437 CC module/bdev/split/vbdev_split_rpc.o 00:37:51.437 SO libspdk_bdev_passthru.so.6.0 00:37:51.437 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:37:51.437 LIB libspdk_bdev_ftl.a 00:37:51.437 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:37:51.695 SO libspdk_bdev_ftl.so.6.0 00:37:51.695 CC module/bdev/uring/bdev_uring_rpc.o 00:37:51.695 CC module/bdev/aio/bdev_aio_rpc.o 00:37:51.695 SYMLINK libspdk_bdev_passthru.so 00:37:51.695 CC module/bdev/raid/bdev_raid_sb.o 00:37:51.695 SYMLINK libspdk_bdev_ftl.so 00:37:51.695 LIB libspdk_bdev_split.a 00:37:51.695 CC module/bdev/raid/raid0.o 00:37:51.695 SO libspdk_bdev_split.so.6.0 00:37:51.695 LIB libspdk_bdev_iscsi.a 00:37:51.695 SO libspdk_bdev_iscsi.so.6.0 00:37:51.695 LIB libspdk_bdev_zone_block.a 00:37:51.695 SYMLINK libspdk_bdev_split.so 00:37:51.695 LIB libspdk_bdev_aio.a 00:37:51.695 LIB libspdk_bdev_uring.a 00:37:51.695 SO libspdk_bdev_zone_block.so.6.0 00:37:51.695 CC module/bdev/raid/raid1.o 00:37:51.695 SYMLINK libspdk_bdev_iscsi.so 00:37:51.695 CC module/bdev/raid/concat.o 00:37:51.695 CC module/bdev/virtio/bdev_virtio_scsi.o 00:37:51.695 SO libspdk_bdev_uring.so.6.0 00:37:51.695 SO libspdk_bdev_aio.so.6.0 00:37:51.954 SYMLINK libspdk_bdev_zone_block.so 00:37:51.954 CC module/bdev/virtio/bdev_virtio_blk.o 00:37:51.954 SYMLINK libspdk_bdev_aio.so 00:37:51.954 CC module/bdev/virtio/bdev_virtio_rpc.o 00:37:51.954 SYMLINK libspdk_bdev_uring.so 00:37:51.954 LIB libspdk_bdev_raid.a 00:37:52.213 SO libspdk_bdev_raid.so.6.0 00:37:52.213 SYMLINK libspdk_bdev_raid.so 00:37:52.472 LIB libspdk_bdev_virtio.a 00:37:52.472 SO libspdk_bdev_virtio.so.6.0 00:37:52.472 SYMLINK libspdk_bdev_virtio.so 00:37:53.040 LIB libspdk_bdev_nvme.a 00:37:53.040 SO libspdk_bdev_nvme.so.7.1 00:37:53.299 SYMLINK libspdk_bdev_nvme.so 00:37:53.558 CC module/event/subsystems/iobuf/iobuf.o 00:37:53.558 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:37:53.558 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:37:53.558 CC module/event/subsystems/scheduler/scheduler.o 00:37:53.558 CC module/event/subsystems/keyring/keyring.o 00:37:53.558 CC module/event/subsystems/fsdev/fsdev.o 00:37:53.558 CC module/event/subsystems/vmd/vmd.o 00:37:53.558 CC module/event/subsystems/sock/sock.o 00:37:53.816 CC module/event/subsystems/vmd/vmd_rpc.o 00:37:53.816 LIB libspdk_event_fsdev.a 00:37:53.816 LIB libspdk_event_keyring.a 00:37:53.816 LIB libspdk_event_vhost_blk.a 00:37:53.816 LIB libspdk_event_scheduler.a 00:37:53.816 SO libspdk_event_fsdev.so.1.0 00:37:53.816 SO libspdk_event_keyring.so.1.0 00:37:53.816 LIB libspdk_event_iobuf.a 00:37:53.816 LIB libspdk_event_sock.a 00:37:53.816 LIB libspdk_event_vmd.a 00:37:53.816 SO libspdk_event_vhost_blk.so.3.0 00:37:53.816 SO libspdk_event_scheduler.so.4.0 00:37:53.816 SO libspdk_event_iobuf.so.3.0 00:37:53.816 SO libspdk_event_sock.so.5.0 00:37:53.816 SYMLINK libspdk_event_keyring.so 00:37:53.816 SO libspdk_event_vmd.so.6.0 00:37:53.816 SYMLINK libspdk_event_fsdev.so 00:37:53.816 SYMLINK libspdk_event_vhost_blk.so 00:37:53.816 SYMLINK libspdk_event_scheduler.so 00:37:54.074 SYMLINK libspdk_event_sock.so 00:37:54.074 SYMLINK libspdk_event_iobuf.so 00:37:54.074 SYMLINK libspdk_event_vmd.so 00:37:54.074 CC module/event/subsystems/accel/accel.o 00:37:54.334 LIB 
libspdk_event_accel.a 00:37:54.334 SO libspdk_event_accel.so.6.0 00:37:54.334 SYMLINK libspdk_event_accel.so 00:37:54.903 CC module/event/subsystems/bdev/bdev.o 00:37:54.903 LIB libspdk_event_bdev.a 00:37:54.903 SO libspdk_event_bdev.so.6.0 00:37:54.903 SYMLINK libspdk_event_bdev.so 00:37:55.162 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:37:55.162 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:37:55.162 CC module/event/subsystems/scsi/scsi.o 00:37:55.162 CC module/event/subsystems/nbd/nbd.o 00:37:55.162 CC module/event/subsystems/ublk/ublk.o 00:37:55.421 LIB libspdk_event_nbd.a 00:37:55.421 LIB libspdk_event_scsi.a 00:37:55.421 SO libspdk_event_nbd.so.6.0 00:37:55.421 SO libspdk_event_scsi.so.6.0 00:37:55.421 LIB libspdk_event_ublk.a 00:37:55.421 SO libspdk_event_ublk.so.3.0 00:37:55.421 SYMLINK libspdk_event_nbd.so 00:37:55.421 SYMLINK libspdk_event_scsi.so 00:37:55.680 LIB libspdk_event_nvmf.a 00:37:55.680 SYMLINK libspdk_event_ublk.so 00:37:55.680 SO libspdk_event_nvmf.so.6.0 00:37:55.680 SYMLINK libspdk_event_nvmf.so 00:37:55.680 CC module/event/subsystems/iscsi/iscsi.o 00:37:55.680 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:37:55.939 LIB libspdk_event_vhost_scsi.a 00:37:55.939 LIB libspdk_event_iscsi.a 00:37:55.939 SO libspdk_event_vhost_scsi.so.3.0 00:37:55.939 SO libspdk_event_iscsi.so.6.0 00:37:56.199 SYMLINK libspdk_event_vhost_scsi.so 00:37:56.199 SYMLINK libspdk_event_iscsi.so 00:37:56.199 SO libspdk.so.6.0 00:37:56.199 SYMLINK libspdk.so 00:37:56.459 CC app/trace_record/trace_record.o 00:37:56.459 CXX app/trace/trace.o 00:37:56.459 CC app/spdk_lspci/spdk_lspci.o 00:37:56.459 CC app/spdk_nvme_perf/perf.o 00:37:56.459 CC app/nvmf_tgt/nvmf_main.o 00:37:56.459 CC app/iscsi_tgt/iscsi_tgt.o 00:37:56.459 CC examples/ioat/perf/perf.o 00:37:56.459 CC examples/util/zipf/zipf.o 00:37:56.719 CC test/thread/poller_perf/poller_perf.o 00:37:56.719 CC app/spdk_tgt/spdk_tgt.o 00:37:56.719 LINK spdk_lspci 00:37:56.719 LINK zipf 00:37:56.719 LINK spdk_trace_record 00:37:56.719 LINK poller_perf 00:37:56.719 LINK nvmf_tgt 00:37:56.978 LINK spdk_tgt 00:37:56.978 LINK ioat_perf 00:37:56.978 LINK iscsi_tgt 00:37:56.978 LINK spdk_trace 00:37:56.978 CC examples/interrupt_tgt/interrupt_tgt.o 00:37:56.978 CC app/spdk_nvme_identify/identify.o 00:37:56.978 CC app/spdk_nvme_discover/discovery_aer.o 00:37:57.237 CC app/spdk_top/spdk_top.o 00:37:57.237 CC examples/ioat/verify/verify.o 00:37:57.237 CC test/dma/test_dma/test_dma.o 00:37:57.237 LINK interrupt_tgt 00:37:57.237 CC app/spdk_dd/spdk_dd.o 00:37:57.237 LINK spdk_nvme_discover 00:37:57.237 CC app/fio/nvme/fio_plugin.o 00:37:57.496 LINK verify 00:37:57.496 CC test/app/bdev_svc/bdev_svc.o 00:37:57.496 LINK spdk_nvme_perf 00:37:57.496 TEST_HEADER include/spdk/accel.h 00:37:57.496 TEST_HEADER include/spdk/accel_module.h 00:37:57.496 TEST_HEADER include/spdk/assert.h 00:37:57.496 TEST_HEADER include/spdk/barrier.h 00:37:57.496 TEST_HEADER include/spdk/base64.h 00:37:57.496 TEST_HEADER include/spdk/bdev.h 00:37:57.496 TEST_HEADER include/spdk/bdev_module.h 00:37:57.496 TEST_HEADER include/spdk/bdev_zone.h 00:37:57.496 TEST_HEADER include/spdk/bit_array.h 00:37:57.496 TEST_HEADER include/spdk/bit_pool.h 00:37:57.496 TEST_HEADER include/spdk/blob_bdev.h 00:37:57.496 TEST_HEADER include/spdk/blobfs_bdev.h 00:37:57.496 TEST_HEADER include/spdk/blobfs.h 00:37:57.496 TEST_HEADER include/spdk/blob.h 00:37:57.496 TEST_HEADER include/spdk/conf.h 00:37:57.496 TEST_HEADER include/spdk/config.h 00:37:57.496 TEST_HEADER include/spdk/cpuset.h 00:37:57.496 
TEST_HEADER include/spdk/crc16.h 00:37:57.496 TEST_HEADER include/spdk/crc32.h 00:37:57.496 TEST_HEADER include/spdk/crc64.h 00:37:57.496 TEST_HEADER include/spdk/dif.h 00:37:57.496 TEST_HEADER include/spdk/dma.h 00:37:57.496 TEST_HEADER include/spdk/endian.h 00:37:57.496 TEST_HEADER include/spdk/env_dpdk.h 00:37:57.496 TEST_HEADER include/spdk/env.h 00:37:57.496 TEST_HEADER include/spdk/event.h 00:37:57.496 TEST_HEADER include/spdk/fd_group.h 00:37:57.496 TEST_HEADER include/spdk/fd.h 00:37:57.496 TEST_HEADER include/spdk/file.h 00:37:57.496 TEST_HEADER include/spdk/fsdev.h 00:37:57.496 TEST_HEADER include/spdk/fsdev_module.h 00:37:57.496 TEST_HEADER include/spdk/ftl.h 00:37:57.496 TEST_HEADER include/spdk/fuse_dispatcher.h 00:37:57.496 TEST_HEADER include/spdk/gpt_spec.h 00:37:57.496 TEST_HEADER include/spdk/hexlify.h 00:37:57.496 TEST_HEADER include/spdk/histogram_data.h 00:37:57.497 TEST_HEADER include/spdk/idxd.h 00:37:57.497 TEST_HEADER include/spdk/idxd_spec.h 00:37:57.497 TEST_HEADER include/spdk/init.h 00:37:57.497 TEST_HEADER include/spdk/ioat.h 00:37:57.497 TEST_HEADER include/spdk/ioat_spec.h 00:37:57.497 TEST_HEADER include/spdk/iscsi_spec.h 00:37:57.497 TEST_HEADER include/spdk/json.h 00:37:57.497 TEST_HEADER include/spdk/jsonrpc.h 00:37:57.497 TEST_HEADER include/spdk/keyring.h 00:37:57.497 TEST_HEADER include/spdk/keyring_module.h 00:37:57.497 TEST_HEADER include/spdk/likely.h 00:37:57.497 TEST_HEADER include/spdk/log.h 00:37:57.497 TEST_HEADER include/spdk/lvol.h 00:37:57.497 TEST_HEADER include/spdk/md5.h 00:37:57.497 TEST_HEADER include/spdk/memory.h 00:37:57.497 TEST_HEADER include/spdk/mmio.h 00:37:57.497 TEST_HEADER include/spdk/nbd.h 00:37:57.497 TEST_HEADER include/spdk/net.h 00:37:57.497 TEST_HEADER include/spdk/notify.h 00:37:57.497 TEST_HEADER include/spdk/nvme.h 00:37:57.497 TEST_HEADER include/spdk/nvme_intel.h 00:37:57.497 TEST_HEADER include/spdk/nvme_ocssd.h 00:37:57.497 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:37:57.497 TEST_HEADER include/spdk/nvme_spec.h 00:37:57.497 TEST_HEADER include/spdk/nvme_zns.h 00:37:57.497 TEST_HEADER include/spdk/nvmf_cmd.h 00:37:57.497 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:37:57.497 TEST_HEADER include/spdk/nvmf.h 00:37:57.497 TEST_HEADER include/spdk/nvmf_spec.h 00:37:57.497 TEST_HEADER include/spdk/nvmf_transport.h 00:37:57.497 TEST_HEADER include/spdk/opal.h 00:37:57.497 LINK bdev_svc 00:37:57.497 TEST_HEADER include/spdk/opal_spec.h 00:37:57.755 TEST_HEADER include/spdk/pci_ids.h 00:37:57.755 TEST_HEADER include/spdk/pipe.h 00:37:57.755 CC examples/thread/thread/thread_ex.o 00:37:57.755 TEST_HEADER include/spdk/queue.h 00:37:57.755 TEST_HEADER include/spdk/reduce.h 00:37:57.755 TEST_HEADER include/spdk/rpc.h 00:37:57.755 TEST_HEADER include/spdk/scheduler.h 00:37:57.755 TEST_HEADER include/spdk/scsi.h 00:37:57.755 TEST_HEADER include/spdk/scsi_spec.h 00:37:57.755 TEST_HEADER include/spdk/sock.h 00:37:57.755 TEST_HEADER include/spdk/stdinc.h 00:37:57.755 CC app/fio/bdev/fio_plugin.o 00:37:57.755 TEST_HEADER include/spdk/string.h 00:37:57.755 TEST_HEADER include/spdk/thread.h 00:37:57.755 TEST_HEADER include/spdk/trace.h 00:37:57.755 TEST_HEADER include/spdk/trace_parser.h 00:37:57.755 TEST_HEADER include/spdk/tree.h 00:37:57.755 TEST_HEADER include/spdk/ublk.h 00:37:57.755 TEST_HEADER include/spdk/util.h 00:37:57.755 TEST_HEADER include/spdk/uuid.h 00:37:57.755 TEST_HEADER include/spdk/version.h 00:37:57.755 TEST_HEADER include/spdk/vfio_user_pci.h 00:37:57.755 TEST_HEADER include/spdk/vfio_user_spec.h 00:37:57.755 
TEST_HEADER include/spdk/vhost.h 00:37:57.755 TEST_HEADER include/spdk/vmd.h 00:37:57.755 LINK spdk_dd 00:37:57.755 TEST_HEADER include/spdk/xor.h 00:37:57.755 TEST_HEADER include/spdk/zipf.h 00:37:57.756 CXX test/cpp_headers/accel.o 00:37:57.756 LINK test_dma 00:37:57.756 CC examples/sock/hello_world/hello_sock.o 00:37:57.756 LINK spdk_nvme_identify 00:37:58.014 CXX test/cpp_headers/accel_module.o 00:37:58.014 LINK spdk_nvme 00:37:58.014 LINK thread 00:37:58.014 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:37:58.014 LINK spdk_top 00:37:58.014 CC app/vhost/vhost.o 00:37:58.014 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:37:58.014 LINK hello_sock 00:37:58.014 CXX test/cpp_headers/assert.o 00:37:58.014 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:37:58.273 LINK spdk_bdev 00:37:58.273 CC examples/vmd/lsvmd/lsvmd.o 00:37:58.273 LINK vhost 00:37:58.273 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:37:58.273 CC examples/vmd/led/led.o 00:37:58.273 CXX test/cpp_headers/barrier.o 00:37:58.273 CC examples/idxd/perf/perf.o 00:37:58.532 CC test/env/vtophys/vtophys.o 00:37:58.532 LINK lsvmd 00:37:58.532 LINK led 00:37:58.532 CXX test/cpp_headers/base64.o 00:37:58.532 LINK nvme_fuzz 00:37:58.532 CC test/env/mem_callbacks/mem_callbacks.o 00:37:58.532 LINK vtophys 00:37:58.532 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:37:58.532 CXX test/cpp_headers/bdev.o 00:37:58.532 CXX test/cpp_headers/bdev_module.o 00:37:58.791 LINK idxd_perf 00:37:58.791 LINK vhost_fuzz 00:37:58.791 CXX test/cpp_headers/bdev_zone.o 00:37:58.791 CC test/env/memory/memory_ut.o 00:37:58.791 CC test/env/pci/pci_ut.o 00:37:58.791 LINK env_dpdk_post_init 00:37:58.791 CXX test/cpp_headers/bit_array.o 00:37:58.791 CXX test/cpp_headers/bit_pool.o 00:37:59.049 CC examples/accel/perf/accel_perf.o 00:37:59.049 CC examples/fsdev/hello_world/hello_fsdev.o 00:37:59.049 CXX test/cpp_headers/blob_bdev.o 00:37:59.049 CC test/app/histogram_perf/histogram_perf.o 00:37:59.049 CC test/event/event_perf/event_perf.o 00:37:59.049 CC examples/blob/hello_world/hello_blob.o 00:37:59.049 LINK pci_ut 00:37:59.308 LINK mem_callbacks 00:37:59.308 CXX test/cpp_headers/blobfs_bdev.o 00:37:59.308 LINK histogram_perf 00:37:59.308 LINK event_perf 00:37:59.308 LINK hello_fsdev 00:37:59.308 LINK hello_blob 00:37:59.567 CC examples/blob/cli/blobcli.o 00:37:59.567 CXX test/cpp_headers/blobfs.o 00:37:59.567 LINK accel_perf 00:37:59.567 CC test/app/jsoncat/jsoncat.o 00:37:59.567 CC test/event/reactor/reactor.o 00:37:59.567 CC examples/nvme/hello_world/hello_world.o 00:37:59.567 CC test/app/stub/stub.o 00:37:59.567 CXX test/cpp_headers/blob.o 00:37:59.567 LINK jsoncat 00:37:59.826 LINK iscsi_fuzz 00:37:59.826 LINK reactor 00:37:59.826 CC test/rpc_client/rpc_client_test.o 00:37:59.826 CC test/nvme/aer/aer.o 00:37:59.826 LINK stub 00:37:59.826 CXX test/cpp_headers/conf.o 00:37:59.826 LINK hello_world 00:37:59.826 CC test/event/reactor_perf/reactor_perf.o 00:38:00.085 LINK blobcli 00:38:00.085 LINK rpc_client_test 00:38:00.085 CXX test/cpp_headers/config.o 00:38:00.085 LINK memory_ut 00:38:00.085 CXX test/cpp_headers/cpuset.o 00:38:00.085 CC test/accel/dif/dif.o 00:38:00.085 CC examples/nvme/reconnect/reconnect.o 00:38:00.085 LINK reactor_perf 00:38:00.085 LINK aer 00:38:00.085 CC test/blobfs/mkfs/mkfs.o 00:38:00.085 CC test/event/app_repeat/app_repeat.o 00:38:00.344 CXX test/cpp_headers/crc16.o 00:38:00.344 LINK app_repeat 00:38:00.344 CC test/event/scheduler/scheduler.o 00:38:00.344 LINK mkfs 00:38:00.344 CC test/nvme/reset/reset.o 00:38:00.344 CC 
examples/bdev/hello_world/hello_bdev.o 00:38:00.344 CC examples/bdev/bdevperf/bdevperf.o 00:38:00.344 LINK reconnect 00:38:00.344 CC test/lvol/esnap/esnap.o 00:38:00.344 CXX test/cpp_headers/crc32.o 00:38:00.603 CC examples/nvme/nvme_manage/nvme_manage.o 00:38:00.603 LINK scheduler 00:38:00.603 CXX test/cpp_headers/crc64.o 00:38:00.603 LINK reset 00:38:00.603 LINK hello_bdev 00:38:00.603 CC examples/nvme/arbitration/arbitration.o 00:38:00.603 CC test/nvme/sgl/sgl.o 00:38:00.603 LINK dif 00:38:00.862 CXX test/cpp_headers/dif.o 00:38:00.862 CC examples/nvme/hotplug/hotplug.o 00:38:00.862 CC test/nvme/e2edp/nvme_dp.o 00:38:00.862 CC examples/nvme/cmb_copy/cmb_copy.o 00:38:00.862 LINK sgl 00:38:00.862 CXX test/cpp_headers/dma.o 00:38:01.121 LINK arbitration 00:38:01.121 CC examples/nvme/abort/abort.o 00:38:01.121 LINK nvme_manage 00:38:01.121 CXX test/cpp_headers/endian.o 00:38:01.121 LINK cmb_copy 00:38:01.121 LINK hotplug 00:38:01.121 LINK nvme_dp 00:38:01.380 LINK bdevperf 00:38:01.380 CC test/nvme/overhead/overhead.o 00:38:01.380 CXX test/cpp_headers/env_dpdk.o 00:38:01.380 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:38:01.380 CXX test/cpp_headers/env.o 00:38:01.380 CXX test/cpp_headers/event.o 00:38:01.380 CXX test/cpp_headers/fd_group.o 00:38:01.380 CC test/nvme/err_injection/err_injection.o 00:38:01.380 LINK abort 00:38:01.380 LINK pmr_persistence 00:38:01.639 CXX test/cpp_headers/fd.o 00:38:01.639 CC test/nvme/startup/startup.o 00:38:01.639 CC test/nvme/reserve/reserve.o 00:38:01.639 CXX test/cpp_headers/file.o 00:38:01.639 CXX test/cpp_headers/fsdev.o 00:38:01.639 LINK overhead 00:38:01.639 LINK err_injection 00:38:01.639 CC test/nvme/simple_copy/simple_copy.o 00:38:01.639 LINK startup 00:38:01.897 CXX test/cpp_headers/fsdev_module.o 00:38:01.897 LINK reserve 00:38:01.897 CC test/nvme/boot_partition/boot_partition.o 00:38:01.897 CC test/nvme/connect_stress/connect_stress.o 00:38:01.897 CC test/nvme/compliance/nvme_compliance.o 00:38:01.897 CC test/bdev/bdevio/bdevio.o 00:38:01.897 CC examples/nvmf/nvmf/nvmf.o 00:38:01.897 LINK simple_copy 00:38:01.897 CXX test/cpp_headers/ftl.o 00:38:01.897 CC test/nvme/fused_ordering/fused_ordering.o 00:38:02.156 CC test/nvme/doorbell_aers/doorbell_aers.o 00:38:02.156 LINK connect_stress 00:38:02.156 LINK boot_partition 00:38:02.156 CXX test/cpp_headers/fuse_dispatcher.o 00:38:02.156 LINK fused_ordering 00:38:02.156 LINK nvme_compliance 00:38:02.156 CXX test/cpp_headers/gpt_spec.o 00:38:02.415 LINK doorbell_aers 00:38:02.415 CC test/nvme/fdp/fdp.o 00:38:02.415 LINK nvmf 00:38:02.415 CC test/nvme/cuse/cuse.o 00:38:02.415 LINK bdevio 00:38:02.415 CXX test/cpp_headers/hexlify.o 00:38:02.415 CXX test/cpp_headers/histogram_data.o 00:38:02.415 CXX test/cpp_headers/idxd.o 00:38:02.415 CXX test/cpp_headers/idxd_spec.o 00:38:02.415 CXX test/cpp_headers/init.o 00:38:02.674 CXX test/cpp_headers/ioat.o 00:38:02.674 CXX test/cpp_headers/ioat_spec.o 00:38:02.674 CXX test/cpp_headers/iscsi_spec.o 00:38:02.674 CXX test/cpp_headers/json.o 00:38:02.674 CXX test/cpp_headers/jsonrpc.o 00:38:02.674 CXX test/cpp_headers/keyring.o 00:38:02.674 LINK fdp 00:38:02.674 CXX test/cpp_headers/keyring_module.o 00:38:02.674 CXX test/cpp_headers/likely.o 00:38:02.674 CXX test/cpp_headers/log.o 00:38:02.674 CXX test/cpp_headers/lvol.o 00:38:02.932 CXX test/cpp_headers/md5.o 00:38:02.932 CXX test/cpp_headers/memory.o 00:38:02.932 CXX test/cpp_headers/mmio.o 00:38:02.932 CXX test/cpp_headers/nbd.o 00:38:02.932 CXX test/cpp_headers/net.o 00:38:02.932 CXX 
test/cpp_headers/notify.o 00:38:02.932 CXX test/cpp_headers/nvme.o 00:38:02.932 CXX test/cpp_headers/nvme_intel.o 00:38:02.932 CXX test/cpp_headers/nvme_ocssd.o 00:38:02.932 CXX test/cpp_headers/nvme_ocssd_spec.o 00:38:02.932 CXX test/cpp_headers/nvme_spec.o 00:38:03.191 CXX test/cpp_headers/nvme_zns.o 00:38:03.191 CXX test/cpp_headers/nvmf_cmd.o 00:38:03.191 CXX test/cpp_headers/nvmf_fc_spec.o 00:38:03.191 CXX test/cpp_headers/nvmf.o 00:38:03.191 CXX test/cpp_headers/nvmf_spec.o 00:38:03.191 CXX test/cpp_headers/nvmf_transport.o 00:38:03.191 CXX test/cpp_headers/opal.o 00:38:03.191 CXX test/cpp_headers/opal_spec.o 00:38:03.191 CXX test/cpp_headers/pci_ids.o 00:38:03.191 CXX test/cpp_headers/pipe.o 00:38:03.449 CXX test/cpp_headers/queue.o 00:38:03.449 CXX test/cpp_headers/reduce.o 00:38:03.449 CXX test/cpp_headers/rpc.o 00:38:03.449 CXX test/cpp_headers/scheduler.o 00:38:03.449 CXX test/cpp_headers/scsi.o 00:38:03.449 CXX test/cpp_headers/scsi_spec.o 00:38:03.449 CXX test/cpp_headers/sock.o 00:38:03.449 CXX test/cpp_headers/stdinc.o 00:38:03.449 CXX test/cpp_headers/string.o 00:38:03.449 CXX test/cpp_headers/thread.o 00:38:03.449 CXX test/cpp_headers/trace.o 00:38:03.708 CXX test/cpp_headers/trace_parser.o 00:38:03.708 CXX test/cpp_headers/tree.o 00:38:03.708 CXX test/cpp_headers/ublk.o 00:38:03.708 CXX test/cpp_headers/util.o 00:38:03.708 CXX test/cpp_headers/uuid.o 00:38:03.708 CXX test/cpp_headers/version.o 00:38:03.708 CXX test/cpp_headers/vfio_user_pci.o 00:38:03.708 CXX test/cpp_headers/vfio_user_spec.o 00:38:03.708 CXX test/cpp_headers/vhost.o 00:38:03.708 CXX test/cpp_headers/vmd.o 00:38:03.708 LINK cuse 00:38:03.708 CXX test/cpp_headers/xor.o 00:38:03.967 CXX test/cpp_headers/zipf.o 00:38:05.872 LINK esnap 00:38:06.442 00:38:06.442 real 1m32.137s 00:38:06.442 user 8m34.105s 00:38:06.442 sys 1m33.188s 00:38:06.442 09:50:13 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:38:06.442 09:50:13 make -- common/autotest_common.sh@10 -- $ set +x 00:38:06.442 ************************************ 00:38:06.442 END TEST make 00:38:06.442 ************************************ 00:38:06.442 09:50:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:38:06.442 09:50:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:06.442 09:50:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:06.442 09:50:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:06.442 09:50:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:38:06.442 09:50:13 -- pm/common@44 -- $ pid=5298 00:38:06.442 09:50:13 -- pm/common@50 -- $ kill -TERM 5298 00:38:06.442 09:50:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:06.442 09:50:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:38:06.442 09:50:13 -- pm/common@44 -- $ pid=5299 00:38:06.442 09:50:13 -- pm/common@50 -- $ kill -TERM 5299 00:38:06.442 09:50:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:38:06.442 09:50:13 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:38:06.442 09:50:13 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:06.442 09:50:13 -- common/autotest_common.sh@1711 -- # lcov --version 00:38:06.442 09:50:13 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:06.442 09:50:13 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:06.442 09:50:13 -- 
scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:06.442 09:50:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:06.442 09:50:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:06.442 09:50:13 -- scripts/common.sh@336 -- # IFS=.-: 00:38:06.442 09:50:13 -- scripts/common.sh@336 -- # read -ra ver1 00:38:06.442 09:50:13 -- scripts/common.sh@337 -- # IFS=.-: 00:38:06.442 09:50:13 -- scripts/common.sh@337 -- # read -ra ver2 00:38:06.442 09:50:13 -- scripts/common.sh@338 -- # local 'op=<' 00:38:06.442 09:50:13 -- scripts/common.sh@340 -- # ver1_l=2 00:38:06.442 09:50:13 -- scripts/common.sh@341 -- # ver2_l=1 00:38:06.442 09:50:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:06.442 09:50:13 -- scripts/common.sh@344 -- # case "$op" in 00:38:06.442 09:50:13 -- scripts/common.sh@345 -- # : 1 00:38:06.442 09:50:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:06.442 09:50:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:06.442 09:50:13 -- scripts/common.sh@365 -- # decimal 1 00:38:06.442 09:50:13 -- scripts/common.sh@353 -- # local d=1 00:38:06.442 09:50:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:06.442 09:50:13 -- scripts/common.sh@355 -- # echo 1 00:38:06.442 09:50:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:38:06.442 09:50:13 -- scripts/common.sh@366 -- # decimal 2 00:38:06.442 09:50:13 -- scripts/common.sh@353 -- # local d=2 00:38:06.442 09:50:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:06.442 09:50:13 -- scripts/common.sh@355 -- # echo 2 00:38:06.442 09:50:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:38:06.442 09:50:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:06.442 09:50:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:06.442 09:50:13 -- scripts/common.sh@368 -- # return 0 00:38:06.442 09:50:13 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:06.442 09:50:13 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:06.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.442 --rc genhtml_branch_coverage=1 00:38:06.442 --rc genhtml_function_coverage=1 00:38:06.442 --rc genhtml_legend=1 00:38:06.442 --rc geninfo_all_blocks=1 00:38:06.442 --rc geninfo_unexecuted_blocks=1 00:38:06.442 00:38:06.442 ' 00:38:06.442 09:50:13 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:06.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.442 --rc genhtml_branch_coverage=1 00:38:06.442 --rc genhtml_function_coverage=1 00:38:06.442 --rc genhtml_legend=1 00:38:06.442 --rc geninfo_all_blocks=1 00:38:06.442 --rc geninfo_unexecuted_blocks=1 00:38:06.442 00:38:06.442 ' 00:38:06.442 09:50:13 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:06.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.442 --rc genhtml_branch_coverage=1 00:38:06.442 --rc genhtml_function_coverage=1 00:38:06.442 --rc genhtml_legend=1 00:38:06.442 --rc geninfo_all_blocks=1 00:38:06.442 --rc geninfo_unexecuted_blocks=1 00:38:06.442 00:38:06.442 ' 00:38:06.442 09:50:13 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:06.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.443 --rc genhtml_branch_coverage=1 00:38:06.443 --rc genhtml_function_coverage=1 00:38:06.443 --rc genhtml_legend=1 00:38:06.443 --rc geninfo_all_blocks=1 00:38:06.443 --rc geninfo_unexecuted_blocks=1 00:38:06.443 00:38:06.443 ' 00:38:06.443 09:50:13 -- spdk/autotest.sh@25 
-- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:06.443 09:50:13 -- nvmf/common.sh@7 -- # uname -s 00:38:06.443 09:50:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.443 09:50:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.443 09:50:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.443 09:50:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:06.443 09:50:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:06.443 09:50:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.443 09:50:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.443 09:50:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.443 09:50:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.443 09:50:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:06.443 09:50:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:38:06.443 09:50:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:38:06.443 09:50:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.443 09:50:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.443 09:50:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:06.443 09:50:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.443 09:50:13 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:06.443 09:50:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.443 09:50:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.443 09:50:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.443 09:50:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:06.443 09:50:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.443 09:50:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.443 09:50:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.443 09:50:13 -- paths/export.sh@5 -- # export PATH 00:38:06.443 09:50:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.443 09:50:13 -- nvmf/common.sh@51 -- # : 0 00:38:06.443 09:50:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:06.443 09:50:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:06.443 09:50:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.443 09:50:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.443 09:50:13 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.443 09:50:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:06.443 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:06.443 09:50:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:06.443 09:50:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:06.443 09:50:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:06.443 09:50:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:38:06.443 09:50:13 -- spdk/autotest.sh@32 -- # uname -s 00:38:06.443 09:50:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:38:06.443 09:50:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:38:06.443 09:50:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:38:06.443 09:50:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:38:06.443 09:50:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:38:06.443 09:50:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:38:06.702 09:50:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:38:06.702 09:50:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:38:06.702 09:50:13 -- spdk/autotest.sh@48 -- # udevadm_pid=54410 00:38:06.702 09:50:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:38:06.702 09:50:13 -- pm/common@17 -- # local monitor 00:38:06.702 09:50:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:38:06.702 09:50:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:38:06.702 09:50:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:38:06.702 09:50:13 -- pm/common@25 -- # sleep 1 00:38:06.702 09:50:13 -- pm/common@21 -- # date +%s 00:38:06.702 09:50:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733737813 00:38:06.702 09:50:13 -- pm/common@21 -- # date +%s 00:38:06.702 09:50:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733737813 00:38:06.702 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733737813_collect-vmstat.pm.log 00:38:06.702 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733737813_collect-cpu-load.pm.log 00:38:07.641 09:50:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:38:07.641 09:50:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:38:07.641 09:50:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:07.641 09:50:14 -- common/autotest_common.sh@10 -- # set +x 00:38:07.641 09:50:14 -- spdk/autotest.sh@59 -- # create_test_list 00:38:07.641 09:50:14 -- common/autotest_common.sh@752 -- # xtrace_disable 00:38:07.641 09:50:14 -- common/autotest_common.sh@10 -- # set +x 00:38:07.641 09:50:15 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:38:07.641 09:50:15 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:38:07.641 09:50:15 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:38:07.641 09:50:15 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:38:07.641 09:50:15 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:38:07.641 09:50:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:38:07.641 09:50:15 -- common/autotest_common.sh@1457 -- # uname 00:38:07.641 09:50:15 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:38:07.641 09:50:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:38:07.641 09:50:15 -- common/autotest_common.sh@1477 -- # uname 00:38:07.641 09:50:15 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:38:07.641 09:50:15 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:38:07.641 09:50:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:38:07.641 lcov: LCOV version 1.15 00:38:07.641 09:50:15 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:38:25.723 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:38:25.723 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:38:43.928 09:50:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:38:43.928 09:50:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:43.928 09:50:48 -- common/autotest_common.sh@10 -- # set +x 00:38:43.928 09:50:48 -- spdk/autotest.sh@78 -- # rm -f 00:38:43.928 09:50:48 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:43.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:43.928 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:38:43.928 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:38:43.928 09:50:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:38:43.928 09:50:49 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:38:43.928 09:50:49 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:38:43.928 09:50:49 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:38:43.928 09:50:49 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:38:43.928 09:50:49 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:38:43.928 09:50:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:38:43.928 09:50:49 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:38:43.928 09:50:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:38:43.928 09:50:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:38:43.928 09:50:49 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:43.928 09:50:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:43.928 09:50:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:43.928 09:50:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:38:43.928 09:50:49 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:38:43.928 09:50:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:38:43.928 09:50:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:38:43.928 09:50:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:38:43.928 09:50:49 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:38:43.928 09:50:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:43.928 09:50:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:38:43.928 09:50:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:38:43.928 09:50:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:38:43.928 09:50:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:38:43.928 09:50:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:43.928 09:50:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:38:43.928 09:50:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:38:43.928 09:50:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:38:43.928 09:50:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:38:43.928 09:50:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:43.928 09:50:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:38:43.928 09:50:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:38:43.928 09:50:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:38:43.928 09:50:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:38:43.928 09:50:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:38:43.928 09:50:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:38:43.928 No valid GPT data, bailing 00:38:43.928 09:50:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:43.928 09:50:49 -- scripts/common.sh@394 -- # pt= 00:38:43.928 09:50:49 -- scripts/common.sh@395 -- # return 1 00:38:43.928 09:50:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:38:43.928 1+0 records in 00:38:43.928 1+0 records out 00:38:43.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00375144 s, 280 MB/s 00:38:43.928 09:50:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:38:43.928 09:50:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:38:43.928 09:50:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:38:43.928 09:50:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:38:43.928 09:50:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:38:43.928 No valid GPT data, bailing 00:38:43.928 09:50:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:38:43.928 09:50:49 -- scripts/common.sh@394 -- # pt= 00:38:43.928 09:50:49 -- scripts/common.sh@395 -- # return 1 00:38:43.928 09:50:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:38:43.928 1+0 records in 00:38:43.928 1+0 records out 00:38:43.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476849 s, 220 MB/s 00:38:43.928 09:50:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:38:43.928 09:50:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:38:43.928 09:50:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:38:43.928 09:50:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:38:43.928 09:50:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:38:43.928 No valid GPT data, bailing 00:38:43.928 09:50:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:38:43.928 09:50:49 -- scripts/common.sh@394 -- # pt= 00:38:43.928 09:50:49 -- scripts/common.sh@395 -- # return 1 00:38:43.928 09:50:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero 
of=/dev/nvme1n2 bs=1M count=1 00:38:43.928 1+0 records in 00:38:43.928 1+0 records out 00:38:43.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470486 s, 223 MB/s 00:38:43.928 09:50:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:38:43.928 09:50:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:38:43.928 09:50:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:38:43.928 09:50:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:38:43.928 09:50:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:38:43.928 No valid GPT data, bailing 00:38:43.928 09:50:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:38:43.928 09:50:49 -- scripts/common.sh@394 -- # pt= 00:38:43.928 09:50:49 -- scripts/common.sh@395 -- # return 1 00:38:43.928 09:50:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:38:43.928 1+0 records in 00:38:43.928 1+0 records out 00:38:43.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461135 s, 227 MB/s 00:38:43.928 09:50:49 -- spdk/autotest.sh@105 -- # sync 00:38:43.928 09:50:49 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:38:43.928 09:50:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:38:43.928 09:50:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:38:44.497 09:50:51 -- spdk/autotest.sh@111 -- # uname -s 00:38:44.497 09:50:51 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:38:44.497 09:50:51 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:38:44.497 09:50:51 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:38:45.064 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:45.064 Hugepages 00:38:45.064 node hugesize free / total 00:38:45.064 node0 1048576kB 0 / 0 00:38:45.064 node0 2048kB 0 / 0 00:38:45.064 00:38:45.064 Type BDF Vendor Device NUMA Driver Device Block devices 00:38:45.064 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:38:45.323 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:38:45.323 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:38:45.323 09:50:52 -- spdk/autotest.sh@117 -- # uname -s 00:38:45.323 09:50:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:38:45.323 09:50:52 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:38:45.323 09:50:52 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:45.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:46.150 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:38:46.150 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:38:46.150 09:50:53 -- common/autotest_common.sh@1517 -- # sleep 1 00:38:47.098 09:50:54 -- common/autotest_common.sh@1518 -- # bdfs=() 00:38:47.098 09:50:54 -- common/autotest_common.sh@1518 -- # local bdfs 00:38:47.098 09:50:54 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:38:47.098 09:50:54 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:38:47.098 09:50:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:38:47.098 09:50:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:38:47.098 09:50:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:47.098 09:50:54 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:47.098 09:50:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:38:47.357 09:50:54 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:38:47.357 09:50:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:38:47.357 09:50:54 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:47.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:47.616 Waiting for block devices as requested 00:38:47.616 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:47.616 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:47.878 09:50:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:38:47.878 09:50:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:38:47.878 09:50:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:38:47.878 09:50:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:38:47.878 09:50:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:38:47.878 09:50:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:38:47.878 09:50:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:38:47.878 09:50:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:38:47.878 09:50:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:38:47.878 09:50:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:38:47.878 09:50:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:38:47.878 09:50:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:38:47.878 09:50:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:38:47.878 09:50:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:38:47.878 09:50:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:38:47.878 09:50:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:38:47.878 09:50:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:38:47.878 09:50:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:38:47.878 09:50:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:38:47.878 09:50:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:38:47.878 09:50:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:38:47.878 09:50:55 -- common/autotest_common.sh@1543 -- # continue 00:38:47.878 09:50:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:38:47.878 09:50:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:38:47.878 09:50:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:38:47.878 09:50:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:38:47.878 09:50:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:38:47.878 09:50:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:38:47.878 09:50:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:38:47.878 09:50:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:38:47.878 09:50:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 
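[editor's note] The pre_cleanup section above walks namespaces with an extglob pattern, skips zoned namespaces via /sys/block/<ns>/queue/zoned, lets spdk-gpt.py/blkid decide whether a namespace still carries a partition table, and then zero-fills the first 1 MiB of each unused namespace. A simplified, hedged sketch of that loop (it omits the spdk-gpt.py probe and relies on blkid alone):

    #!/usr/bin/env bash
    # Hedged sketch of the GPT-wipe loop traced above (autotest.sh@97-101 + scripts/common.sh).
    shopt -s extglob                        # needed for the /dev/nvme*n!(*p*) glob

    for dev in /dev/nvme*n!(*p*); do        # whole namespaces only, no partitions
        ns=${dev#/dev/}
        # skip zoned namespaces, as get_zoned_devs does in the trace
        if [[ -e /sys/block/$ns/queue/zoned && $(< /sys/block/$ns/queue/zoned) != none ]]; then
            continue
        fi
        # wipe only when no partition table is detected (cf. "No valid GPT data, bailing")
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1    # clear stale GPT/MBR signatures
        fi
    done
    sync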
00:38:47.878 09:50:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:38:47.878 09:50:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:38:47.878 09:50:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:38:47.878 09:50:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:38:47.878 09:50:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:38:47.878 09:50:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:38:47.878 09:50:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:38:47.878 09:50:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:38:47.878 09:50:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:38:47.878 09:50:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:38:47.878 09:50:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:38:47.878 09:50:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:38:47.878 09:50:55 -- common/autotest_common.sh@1543 -- # continue 00:38:47.878 09:50:55 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:38:47.878 09:50:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.878 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:38:47.878 09:50:55 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:38:47.878 09:50:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.878 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:38:47.878 09:50:55 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:48.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:48.706 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:38:48.706 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:38:48.706 09:50:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:38:48.706 09:50:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.706 09:50:56 -- common/autotest_common.sh@10 -- # set +x 00:38:48.706 09:50:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:38:48.706 09:50:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:38:48.706 09:50:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:38:48.706 09:50:56 -- common/autotest_common.sh@1563 -- # bdfs=() 00:38:48.706 09:50:56 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:38:48.706 09:50:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:38:48.706 09:50:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:38:48.706 09:50:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:38:48.706 09:50:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:38:48.706 09:50:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:38:48.706 09:50:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:48.706 09:50:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:48.706 09:50:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:38:48.966 09:50:56 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:38:48.966 09:50:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:38:48.966 09:50:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:38:48.966 09:50:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:38:48.966 09:50:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:38:48.966 09:50:56 -- 
common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:38:48.966 09:50:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:38:48.966 09:50:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:38:48.966 09:50:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:38:48.966 09:50:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:38:48.966 09:50:56 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:38:48.966 09:50:56 -- common/autotest_common.sh@1572 -- # return 0 00:38:48.966 09:50:56 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:38:48.966 09:50:56 -- common/autotest_common.sh@1580 -- # return 0 00:38:48.966 09:50:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:38:48.966 09:50:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:38:48.966 09:50:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:38:48.966 09:50:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:38:48.966 09:50:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:38:48.966 09:50:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:48.966 09:50:56 -- common/autotest_common.sh@10 -- # set +x 00:38:48.966 09:50:56 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:38:48.966 09:50:56 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:38:48.966 09:50:56 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:38:48.966 09:50:56 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:38:48.966 09:50:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:48.966 09:50:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:48.966 09:50:56 -- common/autotest_common.sh@10 -- # set +x 00:38:48.966 ************************************ 00:38:48.966 START TEST env 00:38:48.966 ************************************ 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:38:48.966 * Looking for test storage... 00:38:48.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1711 -- # lcov --version 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:48.966 09:50:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:48.966 09:50:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:48.966 09:50:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:48.966 09:50:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:38:48.966 09:50:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:38:48.966 09:50:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:38:48.966 09:50:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:38:48.966 09:50:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:38:48.966 09:50:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:38:48.966 09:50:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:38:48.966 09:50:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:48.966 09:50:56 env -- scripts/common.sh@344 -- # case "$op" in 00:38:48.966 09:50:56 env -- scripts/common.sh@345 -- # : 1 00:38:48.966 09:50:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:48.966 09:50:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:48.966 09:50:56 env -- scripts/common.sh@365 -- # decimal 1 00:38:48.966 09:50:56 env -- scripts/common.sh@353 -- # local d=1 00:38:48.966 09:50:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:48.966 09:50:56 env -- scripts/common.sh@355 -- # echo 1 00:38:48.966 09:50:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:38:48.966 09:50:56 env -- scripts/common.sh@366 -- # decimal 2 00:38:48.966 09:50:56 env -- scripts/common.sh@353 -- # local d=2 00:38:48.966 09:50:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:48.966 09:50:56 env -- scripts/common.sh@355 -- # echo 2 00:38:48.966 09:50:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:38:48.966 09:50:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:48.966 09:50:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:48.966 09:50:56 env -- scripts/common.sh@368 -- # return 0 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:48.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.966 --rc genhtml_branch_coverage=1 00:38:48.966 --rc genhtml_function_coverage=1 00:38:48.966 --rc genhtml_legend=1 00:38:48.966 --rc geninfo_all_blocks=1 00:38:48.966 --rc geninfo_unexecuted_blocks=1 00:38:48.966 00:38:48.966 ' 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:48.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.966 --rc genhtml_branch_coverage=1 00:38:48.966 --rc genhtml_function_coverage=1 00:38:48.966 --rc genhtml_legend=1 00:38:48.966 --rc geninfo_all_blocks=1 00:38:48.966 --rc geninfo_unexecuted_blocks=1 00:38:48.966 00:38:48.966 ' 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:48.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.966 --rc genhtml_branch_coverage=1 00:38:48.966 --rc genhtml_function_coverage=1 00:38:48.966 --rc genhtml_legend=1 00:38:48.966 --rc geninfo_all_blocks=1 00:38:48.966 --rc geninfo_unexecuted_blocks=1 00:38:48.966 00:38:48.966 ' 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:48.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.966 --rc genhtml_branch_coverage=1 00:38:48.966 --rc genhtml_function_coverage=1 00:38:48.966 --rc genhtml_legend=1 00:38:48.966 --rc geninfo_all_blocks=1 00:38:48.966 --rc geninfo_unexecuted_blocks=1 00:38:48.966 00:38:48.966 ' 00:38:48.966 09:50:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:48.966 09:50:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:48.966 09:50:56 env -- common/autotest_common.sh@10 -- # set +x 00:38:49.226 ************************************ 00:38:49.226 START TEST env_memory 00:38:49.226 ************************************ 00:38:49.226 09:50:56 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:38:49.226 00:38:49.226 00:38:49.226 CUnit - A unit testing framework for C - Version 2.1-3 00:38:49.226 http://cunit.sourceforge.net/ 00:38:49.226 00:38:49.226 00:38:49.226 Suite: mem_map_2mb 00:38:49.226 Test: alloc and free memory map ...[2024-12-09 09:50:56.519689] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 310:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:38:49.226 passed 00:38:49.226 Test: mem map translation ...[2024-12-09 09:50:56.553602] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:38:49.226 [2024-12-09 09:50:56.553990] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:38:49.226 [2024-12-09 09:50:56.554250] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 622:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:38:49.226 [2024-12-09 09:50:56.554399] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 638:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:38:49.226 passed 00:38:49.226 Test: mem map registration ...[2024-12-09 09:50:56.631320] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:38:49.226 [2024-12-09 09:50:56.631671] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:38:49.226 passed 00:38:49.486 Test: mem map adjacent registrations ...passed 00:38:49.486 Suite: mem_map_4kb 00:38:49.486 Test: alloc and free memory map ...[2024-12-09 09:50:56.820736] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 310:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:38:49.486 passed 00:38:49.486 Test: mem map translation ...[2024-12-09 09:50:56.861258] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=4096 len=1234 00:38:49.486 [2024-12-09 09:50:56.861308] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=4096 00:38:49.486 [2024-12-09 09:50:56.885261] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 622:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:38:49.486 [2024-12-09 09:50:56.885299] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 638:spdk_mem_map_set_translation: *ERROR*: could not get 0xfffffffff000 map 00:38:49.486 passed 00:38:49.486 Test: mem map registration ...[2024-12-09 09:50:56.982397] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=1000 len=1234 00:38:49.486 [2024-12-09 09:50:56.982466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=4096 00:38:49.746 passed 00:38:49.746 Test: mem map adjacent registrations ...passed 00:38:49.746 00:38:49.746 Run Summary: Type Total Ran Passed Failed Inactive 00:38:49.746 suites 2 2 n/a 0 0 00:38:49.746 tests 8 8 8 0 0 00:38:49.746 asserts 304 304 304 0 n/a 00:38:49.746 00:38:49.746 Elapsed time = 0.605 seconds 00:38:49.746 00:38:49.746 real 0m0.625s 00:38:49.746 user 0m0.596s 00:38:49.746 sys 0m0.021s 00:38:49.746 ************************************ 00:38:49.746 END TEST env_memory 00:38:49.746 ************************************ 00:38:49.746 09:50:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:49.746 09:50:57 
env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:38:49.746 09:50:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:38:49.746 09:50:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:49.746 09:50:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:49.746 09:50:57 env -- common/autotest_common.sh@10 -- # set +x 00:38:49.746 ************************************ 00:38:49.746 START TEST env_vtophys 00:38:49.746 ************************************ 00:38:49.746 09:50:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:38:49.746 EAL: lib.eal log level changed from notice to debug 00:38:49.746 EAL: Detected lcore 0 as core 0 on socket 0 00:38:49.746 EAL: Detected lcore 1 as core 0 on socket 0 00:38:49.746 EAL: Detected lcore 2 as core 0 on socket 0 00:38:49.746 EAL: Detected lcore 3 as core 0 on socket 0 00:38:49.746 EAL: Detected lcore 4 as core 0 on socket 0 00:38:49.746 EAL: Detected lcore 5 as core 0 on socket 0 00:38:49.746 EAL: Detected lcore 6 as core 0 on socket 0 00:38:49.746 EAL: Detected lcore 7 as core 0 on socket 0 00:38:49.746 EAL: Detected lcore 8 as core 0 on socket 0 00:38:49.746 EAL: Detected lcore 9 as core 0 on socket 0 00:38:49.746 EAL: Maximum logical cores by configuration: 128 00:38:49.746 EAL: Detected CPU lcores: 10 00:38:49.746 EAL: Detected NUMA nodes: 1 00:38:49.746 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:38:49.747 EAL: Detected shared linkage of DPDK 00:38:49.747 EAL: No shared files mode enabled, IPC will be disabled 00:38:49.747 EAL: Selected IOVA mode 'PA' 00:38:49.747 EAL: Probing VFIO support... 00:38:49.747 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:38:49.747 EAL: VFIO modules not loaded, skipping VFIO support... 00:38:49.747 EAL: Ask a virtual area of 0x2e000 bytes 00:38:49.747 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:38:49.747 EAL: Setting up physically contiguous memory... 
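[editor's note] The START/END banners and the real/user/sys timing lines around env_memory and env_vtophys come from the run_test wrapper in test/common/autotest_common.sh. A hedged approximation of what that wrapper does (the real implementation also manages xtrace and nested suites, which is skipped here):

    #!/usr/bin/env bash
    # Rough sketch of a run_test-style wrapper, inferred from the banners above.
    run_test_sketch() {
        local name=$1; shift
        (( $# >= 1 )) || return 1           # cf. the '[' 2 -le 1 ']' argument-count guard
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        local rc=0
        time "$@" || rc=$?                  # the trace prints real/user/sys after each test
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    # usage mirroring the trace:
    # run_test_sketch env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut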
00:38:49.747 EAL: Setting maximum number of open files to 524288 00:38:49.747 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:38:49.747 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:38:49.747 EAL: Ask a virtual area of 0x61000 bytes 00:38:49.747 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:38:49.747 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:38:49.747 EAL: Ask a virtual area of 0x400000000 bytes 00:38:49.747 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:38:49.747 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:38:49.747 EAL: Ask a virtual area of 0x61000 bytes 00:38:49.747 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:38:49.747 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:38:49.747 EAL: Ask a virtual area of 0x400000000 bytes 00:38:49.747 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:38:49.747 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:38:49.747 EAL: Ask a virtual area of 0x61000 bytes 00:38:49.747 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:38:49.747 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:38:49.747 EAL: Ask a virtual area of 0x400000000 bytes 00:38:49.747 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:38:49.747 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:38:49.747 EAL: Ask a virtual area of 0x61000 bytes 00:38:49.747 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:38:49.747 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:38:49.747 EAL: Ask a virtual area of 0x400000000 bytes 00:38:49.747 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:38:49.747 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:38:49.747 EAL: Hugepages will be freed exactly as allocated. 00:38:49.747 EAL: No shared files mode enabled, IPC is disabled 00:38:49.747 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: TSC frequency is ~2200000 KHz 00:38:50.007 EAL: Main lcore 0 is ready (tid=7fbeac60da00;cpuset=[0]) 00:38:50.007 EAL: Trying to obtain current memory policy. 00:38:50.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.007 EAL: Restoring previous memory policy: 0 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was expanded by 2MB 00:38:50.007 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:38:50.007 EAL: No PCI address specified using 'addr=' in: bus=pci 00:38:50.007 EAL: Mem event callback 'spdk:(nil)' registered 00:38:50.007 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:38:50.007 00:38:50.007 00:38:50.007 CUnit - A unit testing framework for C - Version 2.1-3 00:38:50.007 http://cunit.sourceforge.net/ 00:38:50.007 00:38:50.007 00:38:50.007 Suite: components_suite 00:38:50.007 Test: vtophys_malloc_test ...passed 00:38:50.007 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
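[editor's note] The 0x400000000 reservations above are easier to read with the arithmetic spelled out: each memseg list covers n_segs:8192 segments of hugepage_sz:2097152 bytes, i.e. 16 GiB of virtual address space per list, and four such lists are reserved. A quick check:

    # arithmetic behind the memseg-list reservations in the EAL trace above
    echo $(( 8192 * 2097152 ))              # 17179869184 bytes = 16 GiB per list
    printf '0x%x\n' $(( 8192 * 2097152 ))   # 0x400000000, matching "size = 0x400000000"
    echo $(( 4 * 8192 * 2097152 / 2**30 ))  # 64 GiB of VA reserved across the 4 lists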
00:38:50.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.007 EAL: Restoring previous memory policy: 4 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was expanded by 4MB 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was shrunk by 4MB 00:38:50.007 EAL: Trying to obtain current memory policy. 00:38:50.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.007 EAL: Restoring previous memory policy: 4 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was expanded by 6MB 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was shrunk by 6MB 00:38:50.007 EAL: Trying to obtain current memory policy. 00:38:50.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.007 EAL: Restoring previous memory policy: 4 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was expanded by 10MB 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was shrunk by 10MB 00:38:50.007 EAL: Trying to obtain current memory policy. 00:38:50.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.007 EAL: Restoring previous memory policy: 4 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was expanded by 18MB 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was shrunk by 18MB 00:38:50.007 EAL: Trying to obtain current memory policy. 00:38:50.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.007 EAL: Restoring previous memory policy: 4 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was expanded by 34MB 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.007 EAL: No shared files mode enabled, IPC is disabled 00:38:50.007 EAL: Heap on socket 0 was shrunk by 34MB 00:38:50.007 EAL: Trying to obtain current memory policy. 
00:38:50.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.007 EAL: Restoring previous memory policy: 4 00:38:50.007 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.007 EAL: request: mp_malloc_sync 00:38:50.008 EAL: No shared files mode enabled, IPC is disabled 00:38:50.008 EAL: Heap on socket 0 was expanded by 66MB 00:38:50.008 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.008 EAL: request: mp_malloc_sync 00:38:50.008 EAL: No shared files mode enabled, IPC is disabled 00:38:50.008 EAL: Heap on socket 0 was shrunk by 66MB 00:38:50.008 EAL: Trying to obtain current memory policy. 00:38:50.008 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.008 EAL: Restoring previous memory policy: 4 00:38:50.008 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.008 EAL: request: mp_malloc_sync 00:38:50.008 EAL: No shared files mode enabled, IPC is disabled 00:38:50.008 EAL: Heap on socket 0 was expanded by 130MB 00:38:50.008 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.008 EAL: request: mp_malloc_sync 00:38:50.008 EAL: No shared files mode enabled, IPC is disabled 00:38:50.008 EAL: Heap on socket 0 was shrunk by 130MB 00:38:50.008 EAL: Trying to obtain current memory policy. 00:38:50.008 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.008 EAL: Restoring previous memory policy: 4 00:38:50.008 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.008 EAL: request: mp_malloc_sync 00:38:50.008 EAL: No shared files mode enabled, IPC is disabled 00:38:50.008 EAL: Heap on socket 0 was expanded by 258MB 00:38:50.008 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.008 EAL: request: mp_malloc_sync 00:38:50.008 EAL: No shared files mode enabled, IPC is disabled 00:38:50.008 EAL: Heap on socket 0 was shrunk by 258MB 00:38:50.008 EAL: Trying to obtain current memory policy. 00:38:50.008 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.268 EAL: Restoring previous memory policy: 4 00:38:50.268 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.268 EAL: request: mp_malloc_sync 00:38:50.268 EAL: No shared files mode enabled, IPC is disabled 00:38:50.268 EAL: Heap on socket 0 was expanded by 514MB 00:38:50.268 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.268 EAL: request: mp_malloc_sync 00:38:50.268 EAL: No shared files mode enabled, IPC is disabled 00:38:50.268 EAL: Heap on socket 0 was shrunk by 514MB 00:38:50.268 EAL: Trying to obtain current memory policy. 
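The expand/shrink ladder above (it continues through a final 1 GiB step just below) is the DPDK heap growing and shrinking around the test's increasingly large DMA allocations, with the registered 'spdk:(nil)' mem event callback notified at every step so SPDK can keep its address maps current. For a running target the same heap state can be inspected over RPC; this sketch assumes the env_dpdk_get_mem_stats method, which recent SPDK releases provide and which writes its report files under /tmp.

    # Assumption: env_dpdk_get_mem_stats is available in this SPDK build.
    # It dumps DPDK memory/heap statistics to files and returns their paths.
    ./scripts/rpc.py env_dpdk_get_mem_stats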
00:38:50.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:38:50.527 EAL: Restoring previous memory policy: 4 00:38:50.527 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.527 EAL: request: mp_malloc_sync 00:38:50.527 EAL: No shared files mode enabled, IPC is disabled 00:38:50.527 EAL: Heap on socket 0 was expanded by 1026MB 00:38:50.527 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.788 passed 00:38:50.788 00:38:50.788 Run Summary: Type Total Ran Passed Failed Inactive 00:38:50.788 suites 1 1 n/a 0 0 00:38:50.788 tests 2 2 2 0 0 00:38:50.788 asserts 5512 5512 5512 0 n/a 00:38:50.788 00:38:50.788 Elapsed time = 0.688 seconds 00:38:50.788 EAL: request: mp_malloc_sync 00:38:50.788 EAL: No shared files mode enabled, IPC is disabled 00:38:50.788 EAL: Heap on socket 0 was shrunk by 1026MB 00:38:50.788 EAL: Calling mem event callback 'spdk:(nil)' 00:38:50.788 EAL: request: mp_malloc_sync 00:38:50.788 EAL: No shared files mode enabled, IPC is disabled 00:38:50.788 EAL: Heap on socket 0 was shrunk by 2MB 00:38:50.788 EAL: No shared files mode enabled, IPC is disabled 00:38:50.788 EAL: No shared files mode enabled, IPC is disabled 00:38:50.788 EAL: No shared files mode enabled, IPC is disabled 00:38:50.788 00:38:50.788 real 0m0.894s 00:38:50.788 user 0m0.467s 00:38:50.788 sys 0m0.297s 00:38:50.788 ************************************ 00:38:50.788 END TEST env_vtophys 00:38:50.788 ************************************ 00:38:50.788 09:50:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:50.788 09:50:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:38:50.788 09:50:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:38:50.788 09:50:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:50.788 09:50:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:50.788 09:50:58 env -- common/autotest_common.sh@10 -- # set +x 00:38:50.788 ************************************ 00:38:50.788 START TEST env_pci 00:38:50.788 ************************************ 00:38:50.788 09:50:58 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:38:50.788 00:38:50.788 00:38:50.788 CUnit - A unit testing framework for C - Version 2.1-3 00:38:50.788 http://cunit.sourceforge.net/ 00:38:50.788 00:38:50.788 00:38:50.788 Suite: pci 00:38:50.788 Test: pci_hook ...[2024-12-09 09:50:58.107252] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56650 has claimed it 00:38:50.788 passed 00:38:50.788 00:38:50.788 Run Summary: Type Total Ran Passed Failed Inactive 00:38:50.788 suites 1 1 n/a 0 0 00:38:50.788 tests 1 1 1 0 0 00:38:50.788 asserts 25 25 25 0 n/a 00:38:50.788 00:38:50.788 Elapsed time = 0.002 seconds 00:38:50.788 EAL: Cannot find device (10000:00:01.0) 00:38:50.788 EAL: Failed to attach device on primary process 00:38:50.788 00:38:50.788 real 0m0.021s 00:38:50.788 user 0m0.007s 00:38:50.788 sys 0m0.013s 00:38:50.788 09:50:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:50.788 ************************************ 00:38:50.788 END TEST env_pci 00:38:50.788 ************************************ 00:38:50.788 09:50:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:38:50.788 09:50:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:38:50.788 09:50:58 env -- env/env.sh@15 -- # uname 00:38:50.788 09:50:58 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:38:50.788 09:50:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:38:50.788 09:50:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:38:50.788 09:50:58 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:38:50.788 09:50:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:50.788 09:50:58 env -- common/autotest_common.sh@10 -- # set +x 00:38:50.788 ************************************ 00:38:50.788 START TEST env_dpdk_post_init 00:38:50.788 ************************************ 00:38:50.788 09:50:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:38:50.788 EAL: Detected CPU lcores: 10 00:38:50.789 EAL: Detected NUMA nodes: 1 00:38:50.789 EAL: Detected shared linkage of DPDK 00:38:50.789 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:38:50.789 EAL: Selected IOVA mode 'PA' 00:38:51.049 TELEMETRY: No legacy callbacks, legacy socket not created 00:38:51.049 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:38:51.049 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:38:51.049 Starting DPDK initialization... 00:38:51.049 Starting SPDK post initialization... 00:38:51.049 SPDK NVMe probe 00:38:51.049 Attaching to 0000:00:10.0 00:38:51.049 Attaching to 0000:00:11.0 00:38:51.049 Attached to 0000:00:10.0 00:38:51.049 Attached to 0000:00:11.0 00:38:51.049 Cleaning up... 00:38:51.049 00:38:51.049 real 0m0.194s 00:38:51.049 user 0m0.060s 00:38:51.049 sys 0m0.035s 00:38:51.049 09:50:58 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.049 ************************************ 00:38:51.049 END TEST env_dpdk_post_init 00:38:51.049 ************************************ 00:38:51.049 09:50:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:38:51.049 09:50:58 env -- env/env.sh@26 -- # uname 00:38:51.049 09:50:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:38:51.049 09:50:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:38:51.049 09:50:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:51.049 09:50:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:51.049 09:50:58 env -- common/autotest_common.sh@10 -- # set +x 00:38:51.049 ************************************ 00:38:51.049 START TEST env_mem_callbacks 00:38:51.049 ************************************ 00:38:51.049 09:50:58 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:38:51.049 EAL: Detected CPU lcores: 10 00:38:51.049 EAL: Detected NUMA nodes: 1 00:38:51.049 EAL: Detected shared linkage of DPDK 00:38:51.049 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:38:51.049 EAL: Selected IOVA mode 'PA' 00:38:51.414 TELEMETRY: No legacy callbacks, legacy socket not created 00:38:51.414 00:38:51.414 00:38:51.414 CUnit - A unit testing framework for C - Version 2.1-3 00:38:51.414 http://cunit.sourceforge.net/ 00:38:51.414 00:38:51.414 00:38:51.414 Suite: memory 00:38:51.414 Test: test ... 
00:38:51.414 register 0x200000200000 2097152 00:38:51.414 malloc 3145728 00:38:51.414 register 0x200000400000 4194304 00:38:51.414 buf 0x200000500000 len 3145728 PASSED 00:38:51.414 malloc 64 00:38:51.414 buf 0x2000004fff40 len 64 PASSED 00:38:51.415 malloc 4194304 00:38:51.415 register 0x200000800000 6291456 00:38:51.415 buf 0x200000a00000 len 4194304 PASSED 00:38:51.415 free 0x200000500000 3145728 00:38:51.415 free 0x2000004fff40 64 00:38:51.415 unregister 0x200000400000 4194304 PASSED 00:38:51.415 free 0x200000a00000 4194304 00:38:51.415 unregister 0x200000800000 6291456 PASSED 00:38:51.415 malloc 8388608 00:38:51.415 register 0x200000400000 10485760 00:38:51.415 buf 0x200000600000 len 8388608 PASSED 00:38:51.415 free 0x200000600000 8388608 00:38:51.415 unregister 0x200000400000 10485760 PASSED 00:38:51.415 passed 00:38:51.415 00:38:51.415 Run Summary: Type Total Ran Passed Failed Inactive 00:38:51.415 suites 1 1 n/a 0 0 00:38:51.415 tests 1 1 1 0 0 00:38:51.415 asserts 15 15 15 0 n/a 00:38:51.415 00:38:51.415 Elapsed time = 0.005 seconds 00:38:51.415 00:38:51.415 real 0m0.138s 00:38:51.415 user 0m0.019s 00:38:51.415 sys 0m0.018s 00:38:51.415 09:50:58 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.415 09:50:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:38:51.415 ************************************ 00:38:51.415 END TEST env_mem_callbacks 00:38:51.415 ************************************ 00:38:51.415 00:38:51.415 real 0m2.341s 00:38:51.415 user 0m1.346s 00:38:51.415 sys 0m0.644s 00:38:51.415 09:50:58 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.415 ************************************ 00:38:51.415 END TEST env 00:38:51.415 ************************************ 00:38:51.415 09:50:58 env -- common/autotest_common.sh@10 -- # set +x 00:38:51.415 09:50:58 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:38:51.415 09:50:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:51.415 09:50:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:51.415 09:50:58 -- common/autotest_common.sh@10 -- # set +x 00:38:51.415 ************************************ 00:38:51.415 START TEST rpc 00:38:51.415 ************************************ 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:38:51.415 * Looking for test storage... 
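Earlier in this env suite the spdk_nvme driver probed and attached the two emulated QEMU NVMe controllers (vendor:device 1b36:0010 at 0000:00:10.0 and 0000:00:11.0). The commands below are not part of the test run, just a convenient way to inspect the same binding state on the host with the standard SPDK setup script.

    # Show which PCI devices are currently bound for SPDK userspace drivers.
    sudo ./scripts/setup.sh status

    # 1b36:0010 is QEMU's emulated NVMe controller, as reported by EAL above.
    lspci -nn | grep -i '1b36:0010'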
00:38:51.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:51.415 09:50:58 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:51.415 09:50:58 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:51.415 09:50:58 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:51.415 09:50:58 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:38:51.415 09:50:58 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:38:51.415 09:50:58 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:38:51.415 09:50:58 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:38:51.415 09:50:58 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:38:51.415 09:50:58 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:38:51.415 09:50:58 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:38:51.415 09:50:58 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:51.415 09:50:58 rpc -- scripts/common.sh@344 -- # case "$op" in 00:38:51.415 09:50:58 rpc -- scripts/common.sh@345 -- # : 1 00:38:51.415 09:50:58 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:51.415 09:50:58 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:51.415 09:50:58 rpc -- scripts/common.sh@365 -- # decimal 1 00:38:51.415 09:50:58 rpc -- scripts/common.sh@353 -- # local d=1 00:38:51.415 09:50:58 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:51.415 09:50:58 rpc -- scripts/common.sh@355 -- # echo 1 00:38:51.415 09:50:58 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:38:51.415 09:50:58 rpc -- scripts/common.sh@366 -- # decimal 2 00:38:51.415 09:50:58 rpc -- scripts/common.sh@353 -- # local d=2 00:38:51.415 09:50:58 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:51.415 09:50:58 rpc -- scripts/common.sh@355 -- # echo 2 00:38:51.415 09:50:58 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:38:51.415 09:50:58 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:51.415 09:50:58 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:51.415 09:50:58 rpc -- scripts/common.sh@368 -- # return 0 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.415 --rc genhtml_branch_coverage=1 00:38:51.415 --rc genhtml_function_coverage=1 00:38:51.415 --rc genhtml_legend=1 00:38:51.415 --rc geninfo_all_blocks=1 00:38:51.415 --rc geninfo_unexecuted_blocks=1 00:38:51.415 00:38:51.415 ' 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.415 --rc genhtml_branch_coverage=1 00:38:51.415 --rc genhtml_function_coverage=1 00:38:51.415 --rc genhtml_legend=1 00:38:51.415 --rc geninfo_all_blocks=1 00:38:51.415 --rc geninfo_unexecuted_blocks=1 00:38:51.415 00:38:51.415 ' 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.415 --rc genhtml_branch_coverage=1 00:38:51.415 --rc genhtml_function_coverage=1 00:38:51.415 --rc 
genhtml_legend=1 00:38:51.415 --rc geninfo_all_blocks=1 00:38:51.415 --rc geninfo_unexecuted_blocks=1 00:38:51.415 00:38:51.415 ' 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:51.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.415 --rc genhtml_branch_coverage=1 00:38:51.415 --rc genhtml_function_coverage=1 00:38:51.415 --rc genhtml_legend=1 00:38:51.415 --rc geninfo_all_blocks=1 00:38:51.415 --rc geninfo_unexecuted_blocks=1 00:38:51.415 00:38:51.415 ' 00:38:51.415 09:50:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56767 00:38:51.415 09:50:58 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:38:51.415 09:50:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:38:51.415 09:50:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56767 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@835 -- # '[' -z 56767 ']' 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:51.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:51.415 09:50:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:38:51.415 [2024-12-09 09:50:58.897119] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:38:51.415 [2024-12-09 09:50:58.897248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56767 ] 00:38:51.676 [2024-12-09 09:50:59.045629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:51.676 [2024-12-09 09:50:59.086808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:38:51.676 [2024-12-09 09:50:59.086901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56767' to capture a snapshot of events at runtime. 00:38:51.676 [2024-12-09 09:50:59.086929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:51.676 [2024-12-09 09:50:59.086936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:51.676 [2024-12-09 09:50:59.086943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56767 for offline analysis/debug. 
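The target was started with '-e bdev', so only the bdev tracepoint group is enabled, and the notices above spell out how to grab its trace. A by-hand version looks roughly like the following; the binary paths assume the standard SPDK build layout, and the -f option for replaying a saved trace file is an assumption about the spdk_trace tool rather than something shown in this run. The group name passed to trace_enable_tpoint_group matches the groups listed by trace_get_info later in this log.

    # Snapshot the live trace of the running target, exactly as the notice suggests.
    ./build/bin/spdk_trace -s spdk_tgt -p 56767

    # Or keep the shared-memory trace file for offline analysis once the target exits.
    cp /dev/shm/spdk_tgt_trace.pid56767 /tmp/
    ./build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid56767

    # Extra tracepoint groups can be toggled at runtime over the RPC socket.
    ./scripts/rpc.py trace_enable_tpoint_group nvmf_tcp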
00:38:51.676 [2024-12-09 09:50:59.087344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.676 [2024-12-09 09:50:59.136017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:51.936 09:50:59 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:51.936 09:50:59 rpc -- common/autotest_common.sh@868 -- # return 0 00:38:51.936 09:50:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:38:51.937 09:50:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:38:51.937 09:50:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:38:51.937 09:50:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:38:51.937 09:50:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:51.937 09:50:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:51.937 09:50:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:38:51.937 ************************************ 00:38:51.937 START TEST rpc_integrity 00:38:51.937 ************************************ 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:38:51.937 { 00:38:51.937 "name": "Malloc0", 00:38:51.937 "aliases": [ 00:38:51.937 "1680947e-efe7-45e1-923b-3ac1483203a4" 00:38:51.937 ], 00:38:51.937 "product_name": "Malloc disk", 00:38:51.937 "block_size": 512, 00:38:51.937 "num_blocks": 16384, 00:38:51.937 "uuid": "1680947e-efe7-45e1-923b-3ac1483203a4", 00:38:51.937 "assigned_rate_limits": { 00:38:51.937 "rw_ios_per_sec": 0, 00:38:51.937 "rw_mbytes_per_sec": 0, 00:38:51.937 "r_mbytes_per_sec": 0, 00:38:51.937 "w_mbytes_per_sec": 0 00:38:51.937 }, 00:38:51.937 "claimed": false, 00:38:51.937 "zoned": false, 00:38:51.937 
"supported_io_types": { 00:38:51.937 "read": true, 00:38:51.937 "write": true, 00:38:51.937 "unmap": true, 00:38:51.937 "flush": true, 00:38:51.937 "reset": true, 00:38:51.937 "nvme_admin": false, 00:38:51.937 "nvme_io": false, 00:38:51.937 "nvme_io_md": false, 00:38:51.937 "write_zeroes": true, 00:38:51.937 "zcopy": true, 00:38:51.937 "get_zone_info": false, 00:38:51.937 "zone_management": false, 00:38:51.937 "zone_append": false, 00:38:51.937 "compare": false, 00:38:51.937 "compare_and_write": false, 00:38:51.937 "abort": true, 00:38:51.937 "seek_hole": false, 00:38:51.937 "seek_data": false, 00:38:51.937 "copy": true, 00:38:51.937 "nvme_iov_md": false 00:38:51.937 }, 00:38:51.937 "memory_domains": [ 00:38:51.937 { 00:38:51.937 "dma_device_id": "system", 00:38:51.937 "dma_device_type": 1 00:38:51.937 }, 00:38:51.937 { 00:38:51.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:51.937 "dma_device_type": 2 00:38:51.937 } 00:38:51.937 ], 00:38:51.937 "driver_specific": {} 00:38:51.937 } 00:38:51.937 ]' 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:38:51.937 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.937 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.198 [2024-12-09 09:50:59.440011] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:38:52.198 [2024-12-09 09:50:59.440088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:52.198 [2024-12-09 09:50:59.440109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2344cb0 00:38:52.198 [2024-12-09 09:50:59.440119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:52.198 [2024-12-09 09:50:59.441833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:52.198 [2024-12-09 09:50:59.441886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:38:52.198 Passthru0 00:38:52.198 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.198 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:38:52.198 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.198 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.198 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.198 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:38:52.198 { 00:38:52.198 "name": "Malloc0", 00:38:52.198 "aliases": [ 00:38:52.198 "1680947e-efe7-45e1-923b-3ac1483203a4" 00:38:52.198 ], 00:38:52.198 "product_name": "Malloc disk", 00:38:52.198 "block_size": 512, 00:38:52.198 "num_blocks": 16384, 00:38:52.198 "uuid": "1680947e-efe7-45e1-923b-3ac1483203a4", 00:38:52.198 "assigned_rate_limits": { 00:38:52.198 "rw_ios_per_sec": 0, 00:38:52.198 "rw_mbytes_per_sec": 0, 00:38:52.198 "r_mbytes_per_sec": 0, 00:38:52.198 "w_mbytes_per_sec": 0 00:38:52.198 }, 00:38:52.198 "claimed": true, 00:38:52.198 "claim_type": "exclusive_write", 00:38:52.198 "zoned": false, 00:38:52.198 "supported_io_types": { 00:38:52.198 "read": true, 00:38:52.198 "write": true, 00:38:52.198 "unmap": true, 00:38:52.198 "flush": true, 00:38:52.198 "reset": true, 00:38:52.198 "nvme_admin": false, 
00:38:52.198 "nvme_io": false, 00:38:52.198 "nvme_io_md": false, 00:38:52.198 "write_zeroes": true, 00:38:52.198 "zcopy": true, 00:38:52.198 "get_zone_info": false, 00:38:52.198 "zone_management": false, 00:38:52.198 "zone_append": false, 00:38:52.198 "compare": false, 00:38:52.198 "compare_and_write": false, 00:38:52.198 "abort": true, 00:38:52.198 "seek_hole": false, 00:38:52.198 "seek_data": false, 00:38:52.198 "copy": true, 00:38:52.198 "nvme_iov_md": false 00:38:52.198 }, 00:38:52.198 "memory_domains": [ 00:38:52.198 { 00:38:52.198 "dma_device_id": "system", 00:38:52.198 "dma_device_type": 1 00:38:52.198 }, 00:38:52.198 { 00:38:52.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:52.198 "dma_device_type": 2 00:38:52.198 } 00:38:52.198 ], 00:38:52.198 "driver_specific": {} 00:38:52.198 }, 00:38:52.198 { 00:38:52.198 "name": "Passthru0", 00:38:52.198 "aliases": [ 00:38:52.198 "e5bcd879-6710-5b33-87d2-5096d15aee0a" 00:38:52.198 ], 00:38:52.198 "product_name": "passthru", 00:38:52.198 "block_size": 512, 00:38:52.198 "num_blocks": 16384, 00:38:52.198 "uuid": "e5bcd879-6710-5b33-87d2-5096d15aee0a", 00:38:52.198 "assigned_rate_limits": { 00:38:52.198 "rw_ios_per_sec": 0, 00:38:52.198 "rw_mbytes_per_sec": 0, 00:38:52.198 "r_mbytes_per_sec": 0, 00:38:52.198 "w_mbytes_per_sec": 0 00:38:52.198 }, 00:38:52.198 "claimed": false, 00:38:52.198 "zoned": false, 00:38:52.198 "supported_io_types": { 00:38:52.198 "read": true, 00:38:52.198 "write": true, 00:38:52.198 "unmap": true, 00:38:52.198 "flush": true, 00:38:52.198 "reset": true, 00:38:52.198 "nvme_admin": false, 00:38:52.198 "nvme_io": false, 00:38:52.198 "nvme_io_md": false, 00:38:52.198 "write_zeroes": true, 00:38:52.198 "zcopy": true, 00:38:52.198 "get_zone_info": false, 00:38:52.198 "zone_management": false, 00:38:52.198 "zone_append": false, 00:38:52.198 "compare": false, 00:38:52.198 "compare_and_write": false, 00:38:52.198 "abort": true, 00:38:52.198 "seek_hole": false, 00:38:52.198 "seek_data": false, 00:38:52.198 "copy": true, 00:38:52.198 "nvme_iov_md": false 00:38:52.198 }, 00:38:52.198 "memory_domains": [ 00:38:52.198 { 00:38:52.198 "dma_device_id": "system", 00:38:52.198 "dma_device_type": 1 00:38:52.198 }, 00:38:52.198 { 00:38:52.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:52.198 "dma_device_type": 2 00:38:52.198 } 00:38:52.198 ], 00:38:52.198 "driver_specific": { 00:38:52.198 "passthru": { 00:38:52.198 "name": "Passthru0", 00:38:52.198 "base_bdev_name": "Malloc0" 00:38:52.198 } 00:38:52.198 } 00:38:52.198 } 00:38:52.198 ]' 00:38:52.199 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:38:52.199 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:38:52.199 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:38:52.199 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.199 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.199 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.199 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:38:52.199 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.199 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.199 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.199 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:38:52.199 09:50:59 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.199 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.199 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.199 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:38:52.199 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:38:52.199 09:50:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:38:52.199 00:38:52.199 real 0m0.306s 00:38:52.199 user 0m0.208s 00:38:52.199 sys 0m0.034s 00:38:52.199 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.199 09:50:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.199 ************************************ 00:38:52.199 END TEST rpc_integrity 00:38:52.199 ************************************ 00:38:52.199 09:50:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:38:52.199 09:50:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:52.199 09:50:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:52.199 09:50:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:38:52.199 ************************************ 00:38:52.199 START TEST rpc_plugins 00:38:52.199 ************************************ 00:38:52.199 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:38:52.199 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:38:52.199 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.199 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:38:52.199 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.199 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:38:52.199 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:38:52.199 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.199 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:38:52.199 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.199 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:38:52.199 { 00:38:52.199 "name": "Malloc1", 00:38:52.199 "aliases": [ 00:38:52.199 "9e7f33aa-3145-4f03-8b5f-ce36196adfa3" 00:38:52.199 ], 00:38:52.199 "product_name": "Malloc disk", 00:38:52.199 "block_size": 4096, 00:38:52.199 "num_blocks": 256, 00:38:52.199 "uuid": "9e7f33aa-3145-4f03-8b5f-ce36196adfa3", 00:38:52.199 "assigned_rate_limits": { 00:38:52.199 "rw_ios_per_sec": 0, 00:38:52.199 "rw_mbytes_per_sec": 0, 00:38:52.199 "r_mbytes_per_sec": 0, 00:38:52.199 "w_mbytes_per_sec": 0 00:38:52.199 }, 00:38:52.199 "claimed": false, 00:38:52.199 "zoned": false, 00:38:52.199 "supported_io_types": { 00:38:52.199 "read": true, 00:38:52.199 "write": true, 00:38:52.199 "unmap": true, 00:38:52.199 "flush": true, 00:38:52.199 "reset": true, 00:38:52.199 "nvme_admin": false, 00:38:52.199 "nvme_io": false, 00:38:52.199 "nvme_io_md": false, 00:38:52.199 "write_zeroes": true, 00:38:52.199 "zcopy": true, 00:38:52.199 "get_zone_info": false, 00:38:52.199 "zone_management": false, 00:38:52.199 "zone_append": false, 00:38:52.199 "compare": false, 00:38:52.199 "compare_and_write": false, 00:38:52.199 "abort": true, 00:38:52.199 "seek_hole": false, 00:38:52.199 "seek_data": false, 00:38:52.199 "copy": true, 00:38:52.199 "nvme_iov_md": false 00:38:52.199 }, 00:38:52.199 "memory_domains": [ 00:38:52.199 { 
00:38:52.199 "dma_device_id": "system", 00:38:52.199 "dma_device_type": 1 00:38:52.199 }, 00:38:52.199 { 00:38:52.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:52.199 "dma_device_type": 2 00:38:52.199 } 00:38:52.199 ], 00:38:52.199 "driver_specific": {} 00:38:52.199 } 00:38:52.199 ]' 00:38:52.199 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:38:52.460 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:38:52.460 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:38:52.460 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.460 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:38:52.460 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.460 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:38:52.460 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.460 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:38:52.460 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.460 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:38:52.460 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:38:52.460 09:50:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:38:52.460 00:38:52.460 real 0m0.167s 00:38:52.460 user 0m0.110s 00:38:52.460 sys 0m0.022s 00:38:52.460 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.460 ************************************ 00:38:52.460 END TEST rpc_plugins 00:38:52.460 ************************************ 00:38:52.460 09:50:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:38:52.460 09:50:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:38:52.460 09:50:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:52.460 09:50:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:52.460 09:50:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:38:52.460 ************************************ 00:38:52.460 START TEST rpc_trace_cmd_test 00:38:52.460 ************************************ 00:38:52.460 09:50:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:38:52.460 09:50:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:38:52.460 09:50:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:38:52.460 09:50:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.460 09:50:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:38:52.460 09:50:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.460 09:50:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:38:52.460 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56767", 00:38:52.460 "tpoint_group_mask": "0x8", 00:38:52.460 "iscsi_conn": { 00:38:52.460 "mask": "0x2", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "scsi": { 00:38:52.460 "mask": "0x4", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "bdev": { 00:38:52.460 "mask": "0x8", 00:38:52.460 "tpoint_mask": "0xffffffffffffffff" 00:38:52.460 }, 00:38:52.460 "nvmf_rdma": { 00:38:52.460 "mask": "0x10", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "nvmf_tcp": { 00:38:52.460 "mask": "0x20", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "ftl": { 00:38:52.460 
"mask": "0x40", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "blobfs": { 00:38:52.460 "mask": "0x80", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "dsa": { 00:38:52.460 "mask": "0x200", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "thread": { 00:38:52.460 "mask": "0x400", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "nvme_pcie": { 00:38:52.460 "mask": "0x800", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "iaa": { 00:38:52.460 "mask": "0x1000", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "nvme_tcp": { 00:38:52.460 "mask": "0x2000", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "bdev_nvme": { 00:38:52.460 "mask": "0x4000", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "sock": { 00:38:52.460 "mask": "0x8000", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "blob": { 00:38:52.460 "mask": "0x10000", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "bdev_raid": { 00:38:52.460 "mask": "0x20000", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 }, 00:38:52.460 "scheduler": { 00:38:52.460 "mask": "0x40000", 00:38:52.460 "tpoint_mask": "0x0" 00:38:52.460 } 00:38:52.460 }' 00:38:52.460 09:50:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:38:52.460 09:50:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:38:52.460 09:50:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:38:52.720 09:50:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:38:52.720 09:51:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:38:52.720 09:51:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:38:52.720 09:51:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:38:52.720 09:51:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:38:52.720 09:51:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:38:52.720 09:51:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:38:52.720 00:38:52.720 real 0m0.278s 00:38:52.720 user 0m0.234s 00:38:52.720 sys 0m0.031s 00:38:52.720 09:51:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.720 09:51:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:38:52.720 ************************************ 00:38:52.720 END TEST rpc_trace_cmd_test 00:38:52.720 ************************************ 00:38:52.720 09:51:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:38:52.720 09:51:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:38:52.720 09:51:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:38:52.720 09:51:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:52.720 09:51:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:52.720 09:51:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:38:52.720 ************************************ 00:38:52.720 START TEST rpc_daemon_integrity 00:38:52.720 ************************************ 00:38:52.720 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:38:52.720 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:38:52.720 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.720 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.720 
09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.720 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:38:52.720 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:38:52.982 { 00:38:52.982 "name": "Malloc2", 00:38:52.982 "aliases": [ 00:38:52.982 "312ffc1e-3e69-47d7-8a96-487472c00c03" 00:38:52.982 ], 00:38:52.982 "product_name": "Malloc disk", 00:38:52.982 "block_size": 512, 00:38:52.982 "num_blocks": 16384, 00:38:52.982 "uuid": "312ffc1e-3e69-47d7-8a96-487472c00c03", 00:38:52.982 "assigned_rate_limits": { 00:38:52.982 "rw_ios_per_sec": 0, 00:38:52.982 "rw_mbytes_per_sec": 0, 00:38:52.982 "r_mbytes_per_sec": 0, 00:38:52.982 "w_mbytes_per_sec": 0 00:38:52.982 }, 00:38:52.982 "claimed": false, 00:38:52.982 "zoned": false, 00:38:52.982 "supported_io_types": { 00:38:52.982 "read": true, 00:38:52.982 "write": true, 00:38:52.982 "unmap": true, 00:38:52.982 "flush": true, 00:38:52.982 "reset": true, 00:38:52.982 "nvme_admin": false, 00:38:52.982 "nvme_io": false, 00:38:52.982 "nvme_io_md": false, 00:38:52.982 "write_zeroes": true, 00:38:52.982 "zcopy": true, 00:38:52.982 "get_zone_info": false, 00:38:52.982 "zone_management": false, 00:38:52.982 "zone_append": false, 00:38:52.982 "compare": false, 00:38:52.982 "compare_and_write": false, 00:38:52.982 "abort": true, 00:38:52.982 "seek_hole": false, 00:38:52.982 "seek_data": false, 00:38:52.982 "copy": true, 00:38:52.982 "nvme_iov_md": false 00:38:52.982 }, 00:38:52.982 "memory_domains": [ 00:38:52.982 { 00:38:52.982 "dma_device_id": "system", 00:38:52.982 "dma_device_type": 1 00:38:52.982 }, 00:38:52.982 { 00:38:52.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:52.982 "dma_device_type": 2 00:38:52.982 } 00:38:52.982 ], 00:38:52.982 "driver_specific": {} 00:38:52.982 } 00:38:52.982 ]' 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.982 [2024-12-09 09:51:00.352527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:38:52.982 [2024-12-09 09:51:00.352609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:38:52.982 [2024-12-09 09:51:00.352639] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23a8270 00:38:52.982 [2024-12-09 09:51:00.352649] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:52.982 [2024-12-09 09:51:00.354161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:52.982 [2024-12-09 09:51:00.354211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:38:52.982 Passthru0 00:38:52.982 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:38:52.983 { 00:38:52.983 "name": "Malloc2", 00:38:52.983 "aliases": [ 00:38:52.983 "312ffc1e-3e69-47d7-8a96-487472c00c03" 00:38:52.983 ], 00:38:52.983 "product_name": "Malloc disk", 00:38:52.983 "block_size": 512, 00:38:52.983 "num_blocks": 16384, 00:38:52.983 "uuid": "312ffc1e-3e69-47d7-8a96-487472c00c03", 00:38:52.983 "assigned_rate_limits": { 00:38:52.983 "rw_ios_per_sec": 0, 00:38:52.983 "rw_mbytes_per_sec": 0, 00:38:52.983 "r_mbytes_per_sec": 0, 00:38:52.983 "w_mbytes_per_sec": 0 00:38:52.983 }, 00:38:52.983 "claimed": true, 00:38:52.983 "claim_type": "exclusive_write", 00:38:52.983 "zoned": false, 00:38:52.983 "supported_io_types": { 00:38:52.983 "read": true, 00:38:52.983 "write": true, 00:38:52.983 "unmap": true, 00:38:52.983 "flush": true, 00:38:52.983 "reset": true, 00:38:52.983 "nvme_admin": false, 00:38:52.983 "nvme_io": false, 00:38:52.983 "nvme_io_md": false, 00:38:52.983 "write_zeroes": true, 00:38:52.983 "zcopy": true, 00:38:52.983 "get_zone_info": false, 00:38:52.983 "zone_management": false, 00:38:52.983 "zone_append": false, 00:38:52.983 "compare": false, 00:38:52.983 "compare_and_write": false, 00:38:52.983 "abort": true, 00:38:52.983 "seek_hole": false, 00:38:52.983 "seek_data": false, 00:38:52.983 "copy": true, 00:38:52.983 "nvme_iov_md": false 00:38:52.983 }, 00:38:52.983 "memory_domains": [ 00:38:52.983 { 00:38:52.983 "dma_device_id": "system", 00:38:52.983 "dma_device_type": 1 00:38:52.983 }, 00:38:52.983 { 00:38:52.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:52.983 "dma_device_type": 2 00:38:52.983 } 00:38:52.983 ], 00:38:52.983 "driver_specific": {} 00:38:52.983 }, 00:38:52.983 { 00:38:52.983 "name": "Passthru0", 00:38:52.983 "aliases": [ 00:38:52.983 "d4ad902f-55d5-5184-ab3b-135f8d2f021d" 00:38:52.983 ], 00:38:52.983 "product_name": "passthru", 00:38:52.983 "block_size": 512, 00:38:52.983 "num_blocks": 16384, 00:38:52.983 "uuid": "d4ad902f-55d5-5184-ab3b-135f8d2f021d", 00:38:52.983 "assigned_rate_limits": { 00:38:52.983 "rw_ios_per_sec": 0, 00:38:52.983 "rw_mbytes_per_sec": 0, 00:38:52.983 "r_mbytes_per_sec": 0, 00:38:52.983 "w_mbytes_per_sec": 0 00:38:52.983 }, 00:38:52.983 "claimed": false, 00:38:52.983 "zoned": false, 00:38:52.983 "supported_io_types": { 00:38:52.983 "read": true, 00:38:52.983 "write": true, 00:38:52.983 "unmap": true, 00:38:52.983 "flush": true, 00:38:52.983 "reset": true, 00:38:52.983 "nvme_admin": false, 00:38:52.983 "nvme_io": false, 00:38:52.983 
"nvme_io_md": false, 00:38:52.983 "write_zeroes": true, 00:38:52.983 "zcopy": true, 00:38:52.983 "get_zone_info": false, 00:38:52.983 "zone_management": false, 00:38:52.983 "zone_append": false, 00:38:52.983 "compare": false, 00:38:52.983 "compare_and_write": false, 00:38:52.983 "abort": true, 00:38:52.983 "seek_hole": false, 00:38:52.983 "seek_data": false, 00:38:52.983 "copy": true, 00:38:52.983 "nvme_iov_md": false 00:38:52.983 }, 00:38:52.983 "memory_domains": [ 00:38:52.983 { 00:38:52.983 "dma_device_id": "system", 00:38:52.983 "dma_device_type": 1 00:38:52.983 }, 00:38:52.983 { 00:38:52.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:52.983 "dma_device_type": 2 00:38:52.983 } 00:38:52.983 ], 00:38:52.983 "driver_specific": { 00:38:52.983 "passthru": { 00:38:52.983 "name": "Passthru0", 00:38:52.983 "base_bdev_name": "Malloc2" 00:38:52.983 } 00:38:52.983 } 00:38:52.983 } 00:38:52.983 ]' 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:38:52.983 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:38:53.244 09:51:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:38:53.244 00:38:53.244 real 0m0.323s 00:38:53.244 user 0m0.210s 00:38:53.244 sys 0m0.047s 00:38:53.244 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.244 09:51:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:38:53.244 ************************************ 00:38:53.244 END TEST rpc_daemon_integrity 00:38:53.244 ************************************ 00:38:53.244 09:51:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:53.244 09:51:00 rpc -- rpc/rpc.sh@84 -- # killprocess 56767 00:38:53.244 09:51:00 rpc -- common/autotest_common.sh@954 -- # '[' -z 56767 ']' 00:38:53.244 09:51:00 rpc -- common/autotest_common.sh@958 -- # kill -0 56767 00:38:53.244 09:51:00 rpc -- common/autotest_common.sh@959 -- # uname 00:38:53.244 09:51:00 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:53.244 09:51:00 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56767 00:38:53.244 09:51:00 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:38:53.244 09:51:00 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:53.244 killing process with pid 56767 00:38:53.244 09:51:00 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56767' 00:38:53.244 09:51:00 rpc -- common/autotest_common.sh@973 -- # kill 56767 00:38:53.244 09:51:00 rpc -- common/autotest_common.sh@978 -- # wait 56767 00:38:53.504 00:38:53.504 real 0m2.229s 00:38:53.504 user 0m2.985s 00:38:53.504 sys 0m0.582s 00:38:53.504 09:51:00 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.504 09:51:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:38:53.504 ************************************ 00:38:53.504 END TEST rpc 00:38:53.504 ************************************ 00:38:53.504 09:51:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:38:53.504 09:51:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:53.504 09:51:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:53.504 09:51:00 -- common/autotest_common.sh@10 -- # set +x 00:38:53.504 ************************************ 00:38:53.504 START TEST skip_rpc 00:38:53.504 ************************************ 00:38:53.504 09:51:00 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:38:53.763 * Looking for test storage... 00:38:53.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:53.763 09:51:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:53.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.763 --rc genhtml_branch_coverage=1 00:38:53.763 --rc genhtml_function_coverage=1 00:38:53.763 --rc genhtml_legend=1 00:38:53.763 --rc geninfo_all_blocks=1 00:38:53.763 --rc geninfo_unexecuted_blocks=1 00:38:53.763 00:38:53.763 ' 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:53.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.763 --rc genhtml_branch_coverage=1 00:38:53.763 --rc genhtml_function_coverage=1 00:38:53.763 --rc genhtml_legend=1 00:38:53.763 --rc geninfo_all_blocks=1 00:38:53.763 --rc geninfo_unexecuted_blocks=1 00:38:53.763 00:38:53.763 ' 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:53.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.763 --rc genhtml_branch_coverage=1 00:38:53.763 --rc genhtml_function_coverage=1 00:38:53.763 --rc genhtml_legend=1 00:38:53.763 --rc geninfo_all_blocks=1 00:38:53.763 --rc geninfo_unexecuted_blocks=1 00:38:53.763 00:38:53.763 ' 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:53.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.763 --rc genhtml_branch_coverage=1 00:38:53.763 --rc genhtml_function_coverage=1 00:38:53.763 --rc genhtml_legend=1 00:38:53.763 --rc geninfo_all_blocks=1 00:38:53.763 --rc geninfo_unexecuted_blocks=1 00:38:53.763 00:38:53.763 ' 00:38:53.763 09:51:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:38:53.763 09:51:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:38:53.763 09:51:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:53.763 09:51:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:53.763 ************************************ 00:38:53.763 START TEST skip_rpc 00:38:53.763 ************************************ 00:38:53.763 09:51:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:38:53.763 09:51:01 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:38:53.763 09:51:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56966 00:38:53.763 09:51:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:38:53.763 09:51:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:38:53.763 [2024-12-09 09:51:01.209155] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:38:53.763 [2024-12-09 09:51:01.209257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56966 ] 00:38:54.023 [2024-12-09 09:51:01.357978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.023 [2024-12-09 09:51:01.390806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.023 [2024-12-09 09:51:01.430812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56966 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56966 ']' 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56966 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56966 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:59.295 killing process with pid 56966 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56966' 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56966 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56966 00:38:59.295 00:38:59.295 real 0m5.329s 00:38:59.295 user 0m5.050s 00:38:59.295 sys 0m0.180s 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:59.295 ************************************ 00:38:59.295 END TEST skip_rpc 00:38:59.295 ************************************ 00:38:59.295 09:51:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:59.295 09:51:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:38:59.295 09:51:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:59.295 09:51:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:59.295 09:51:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:59.295 ************************************ 00:38:59.295 START TEST skip_rpc_with_json 00:38:59.295 ************************************ 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57047 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57047 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57047 ']' 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:59.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:59.295 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:38:59.295 [2024-12-09 09:51:06.601636] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:38:59.295 [2024-12-09 09:51:06.601739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57047 ] 00:38:59.295 [2024-12-09 09:51:06.758627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.555 [2024-12-09 09:51:06.798764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.555 [2024-12-09 09:51:06.844525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:59.555 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.555 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:38:59.555 09:51:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:38:59.555 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.555 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:38:59.555 [2024-12-09 09:51:06.991432] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:38:59.555 request: 00:38:59.555 { 00:38:59.555 "trtype": "tcp", 00:38:59.555 "method": "nvmf_get_transports", 00:38:59.555 "req_id": 1 00:38:59.555 } 00:38:59.555 Got JSON-RPC error response 00:38:59.555 response: 00:38:59.555 { 00:38:59.555 "code": -19, 00:38:59.555 "message": "No such device" 00:38:59.555 } 00:38:59.555 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:59.555 09:51:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:38:59.555 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.555 09:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:38:59.555 [2024-12-09 09:51:07.003616] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:59.555 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.555 09:51:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:38:59.555 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.555 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:38:59.814 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.814 09:51:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:38:59.814 { 00:38:59.814 "subsystems": [ 00:38:59.814 { 00:38:59.814 "subsystem": "fsdev", 00:38:59.814 "config": [ 00:38:59.814 { 00:38:59.814 "method": "fsdev_set_opts", 00:38:59.814 "params": { 00:38:59.814 "fsdev_io_pool_size": 65535, 00:38:59.814 "fsdev_io_cache_size": 256 00:38:59.814 } 00:38:59.814 } 00:38:59.814 ] 00:38:59.814 }, 00:38:59.814 { 00:38:59.814 "subsystem": "keyring", 00:38:59.814 "config": [] 00:38:59.814 }, 00:38:59.814 { 00:38:59.814 "subsystem": "iobuf", 00:38:59.814 "config": [ 00:38:59.814 { 00:38:59.814 "method": "iobuf_set_options", 00:38:59.814 "params": { 00:38:59.814 "small_pool_count": 8192, 00:38:59.814 "large_pool_count": 1024, 00:38:59.814 "small_bufsize": 8192, 00:38:59.814 "large_bufsize": 135168, 00:38:59.814 "enable_numa": false 00:38:59.814 } 
00:38:59.814 } 00:38:59.814 ] 00:38:59.814 }, 00:38:59.814 { 00:38:59.814 "subsystem": "sock", 00:38:59.814 "config": [ 00:38:59.814 { 00:38:59.814 "method": "sock_set_default_impl", 00:38:59.814 "params": { 00:38:59.814 "impl_name": "uring" 00:38:59.814 } 00:38:59.814 }, 00:38:59.814 { 00:38:59.814 "method": "sock_impl_set_options", 00:38:59.814 "params": { 00:38:59.814 "impl_name": "ssl", 00:38:59.814 "recv_buf_size": 4096, 00:38:59.814 "send_buf_size": 4096, 00:38:59.814 "enable_recv_pipe": true, 00:38:59.814 "enable_quickack": false, 00:38:59.814 "enable_placement_id": 0, 00:38:59.814 "enable_zerocopy_send_server": true, 00:38:59.814 "enable_zerocopy_send_client": false, 00:38:59.814 "zerocopy_threshold": 0, 00:38:59.814 "tls_version": 0, 00:38:59.814 "enable_ktls": false 00:38:59.814 } 00:38:59.814 }, 00:38:59.814 { 00:38:59.814 "method": "sock_impl_set_options", 00:38:59.814 "params": { 00:38:59.815 "impl_name": "posix", 00:38:59.815 "recv_buf_size": 2097152, 00:38:59.815 "send_buf_size": 2097152, 00:38:59.815 "enable_recv_pipe": true, 00:38:59.815 "enable_quickack": false, 00:38:59.815 "enable_placement_id": 0, 00:38:59.815 "enable_zerocopy_send_server": true, 00:38:59.815 "enable_zerocopy_send_client": false, 00:38:59.815 "zerocopy_threshold": 0, 00:38:59.815 "tls_version": 0, 00:38:59.815 "enable_ktls": false 00:38:59.815 } 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "method": "sock_impl_set_options", 00:38:59.815 "params": { 00:38:59.815 "impl_name": "uring", 00:38:59.815 "recv_buf_size": 2097152, 00:38:59.815 "send_buf_size": 2097152, 00:38:59.815 "enable_recv_pipe": true, 00:38:59.815 "enable_quickack": false, 00:38:59.815 "enable_placement_id": 0, 00:38:59.815 "enable_zerocopy_send_server": false, 00:38:59.815 "enable_zerocopy_send_client": false, 00:38:59.815 "zerocopy_threshold": 0, 00:38:59.815 "tls_version": 0, 00:38:59.815 "enable_ktls": false 00:38:59.815 } 00:38:59.815 } 00:38:59.815 ] 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "vmd", 00:38:59.815 "config": [] 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "accel", 00:38:59.815 "config": [ 00:38:59.815 { 00:38:59.815 "method": "accel_set_options", 00:38:59.815 "params": { 00:38:59.815 "small_cache_size": 128, 00:38:59.815 "large_cache_size": 16, 00:38:59.815 "task_count": 2048, 00:38:59.815 "sequence_count": 2048, 00:38:59.815 "buf_count": 2048 00:38:59.815 } 00:38:59.815 } 00:38:59.815 ] 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "bdev", 00:38:59.815 "config": [ 00:38:59.815 { 00:38:59.815 "method": "bdev_set_options", 00:38:59.815 "params": { 00:38:59.815 "bdev_io_pool_size": 65535, 00:38:59.815 "bdev_io_cache_size": 256, 00:38:59.815 "bdev_auto_examine": true, 00:38:59.815 "iobuf_small_cache_size": 128, 00:38:59.815 "iobuf_large_cache_size": 16 00:38:59.815 } 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "method": "bdev_raid_set_options", 00:38:59.815 "params": { 00:38:59.815 "process_window_size_kb": 1024, 00:38:59.815 "process_max_bandwidth_mb_sec": 0 00:38:59.815 } 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "method": "bdev_iscsi_set_options", 00:38:59.815 "params": { 00:38:59.815 "timeout_sec": 30 00:38:59.815 } 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "method": "bdev_nvme_set_options", 00:38:59.815 "params": { 00:38:59.815 "action_on_timeout": "none", 00:38:59.815 "timeout_us": 0, 00:38:59.815 "timeout_admin_us": 0, 00:38:59.815 "keep_alive_timeout_ms": 10000, 00:38:59.815 "arbitration_burst": 0, 00:38:59.815 "low_priority_weight": 0, 00:38:59.815 "medium_priority_weight": 
0, 00:38:59.815 "high_priority_weight": 0, 00:38:59.815 "nvme_adminq_poll_period_us": 10000, 00:38:59.815 "nvme_ioq_poll_period_us": 0, 00:38:59.815 "io_queue_requests": 0, 00:38:59.815 "delay_cmd_submit": true, 00:38:59.815 "transport_retry_count": 4, 00:38:59.815 "bdev_retry_count": 3, 00:38:59.815 "transport_ack_timeout": 0, 00:38:59.815 "ctrlr_loss_timeout_sec": 0, 00:38:59.815 "reconnect_delay_sec": 0, 00:38:59.815 "fast_io_fail_timeout_sec": 0, 00:38:59.815 "disable_auto_failback": false, 00:38:59.815 "generate_uuids": false, 00:38:59.815 "transport_tos": 0, 00:38:59.815 "nvme_error_stat": false, 00:38:59.815 "rdma_srq_size": 0, 00:38:59.815 "io_path_stat": false, 00:38:59.815 "allow_accel_sequence": false, 00:38:59.815 "rdma_max_cq_size": 0, 00:38:59.815 "rdma_cm_event_timeout_ms": 0, 00:38:59.815 "dhchap_digests": [ 00:38:59.815 "sha256", 00:38:59.815 "sha384", 00:38:59.815 "sha512" 00:38:59.815 ], 00:38:59.815 "dhchap_dhgroups": [ 00:38:59.815 "null", 00:38:59.815 "ffdhe2048", 00:38:59.815 "ffdhe3072", 00:38:59.815 "ffdhe4096", 00:38:59.815 "ffdhe6144", 00:38:59.815 "ffdhe8192" 00:38:59.815 ] 00:38:59.815 } 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "method": "bdev_nvme_set_hotplug", 00:38:59.815 "params": { 00:38:59.815 "period_us": 100000, 00:38:59.815 "enable": false 00:38:59.815 } 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "method": "bdev_wait_for_examine" 00:38:59.815 } 00:38:59.815 ] 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "scsi", 00:38:59.815 "config": null 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "scheduler", 00:38:59.815 "config": [ 00:38:59.815 { 00:38:59.815 "method": "framework_set_scheduler", 00:38:59.815 "params": { 00:38:59.815 "name": "static" 00:38:59.815 } 00:38:59.815 } 00:38:59.815 ] 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "vhost_scsi", 00:38:59.815 "config": [] 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "vhost_blk", 00:38:59.815 "config": [] 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "ublk", 00:38:59.815 "config": [] 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "nbd", 00:38:59.815 "config": [] 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "nvmf", 00:38:59.815 "config": [ 00:38:59.815 { 00:38:59.815 "method": "nvmf_set_config", 00:38:59.815 "params": { 00:38:59.815 "discovery_filter": "match_any", 00:38:59.815 "admin_cmd_passthru": { 00:38:59.815 "identify_ctrlr": false 00:38:59.815 }, 00:38:59.815 "dhchap_digests": [ 00:38:59.815 "sha256", 00:38:59.815 "sha384", 00:38:59.815 "sha512" 00:38:59.815 ], 00:38:59.815 "dhchap_dhgroups": [ 00:38:59.815 "null", 00:38:59.815 "ffdhe2048", 00:38:59.815 "ffdhe3072", 00:38:59.815 "ffdhe4096", 00:38:59.815 "ffdhe6144", 00:38:59.815 "ffdhe8192" 00:38:59.815 ] 00:38:59.815 } 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "method": "nvmf_set_max_subsystems", 00:38:59.815 "params": { 00:38:59.815 "max_subsystems": 1024 00:38:59.815 } 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "method": "nvmf_set_crdt", 00:38:59.815 "params": { 00:38:59.815 "crdt1": 0, 00:38:59.815 "crdt2": 0, 00:38:59.815 "crdt3": 0 00:38:59.815 } 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "method": "nvmf_create_transport", 00:38:59.815 "params": { 00:38:59.815 "trtype": "TCP", 00:38:59.815 "max_queue_depth": 128, 00:38:59.815 "max_io_qpairs_per_ctrlr": 127, 00:38:59.815 "in_capsule_data_size": 4096, 00:38:59.815 "max_io_size": 131072, 00:38:59.815 "io_unit_size": 131072, 00:38:59.815 "max_aq_depth": 128, 00:38:59.815 "num_shared_buffers": 511, 00:38:59.815 
"buf_cache_size": 4294967295, 00:38:59.815 "dif_insert_or_strip": false, 00:38:59.815 "zcopy": false, 00:38:59.815 "c2h_success": true, 00:38:59.815 "sock_priority": 0, 00:38:59.815 "abort_timeout_sec": 1, 00:38:59.815 "ack_timeout": 0, 00:38:59.815 "data_wr_pool_size": 0 00:38:59.815 } 00:38:59.815 } 00:38:59.815 ] 00:38:59.815 }, 00:38:59.815 { 00:38:59.815 "subsystem": "iscsi", 00:38:59.815 "config": [ 00:38:59.815 { 00:38:59.815 "method": "iscsi_set_options", 00:38:59.815 "params": { 00:38:59.815 "node_base": "iqn.2016-06.io.spdk", 00:38:59.815 "max_sessions": 128, 00:38:59.815 "max_connections_per_session": 2, 00:38:59.815 "max_queue_depth": 64, 00:38:59.815 "default_time2wait": 2, 00:38:59.815 "default_time2retain": 20, 00:38:59.815 "first_burst_length": 8192, 00:38:59.815 "immediate_data": true, 00:38:59.815 "allow_duplicated_isid": false, 00:38:59.815 "error_recovery_level": 0, 00:38:59.815 "nop_timeout": 60, 00:38:59.815 "nop_in_interval": 30, 00:38:59.815 "disable_chap": false, 00:38:59.815 "require_chap": false, 00:38:59.815 "mutual_chap": false, 00:38:59.815 "chap_group": 0, 00:38:59.815 "max_large_datain_per_connection": 64, 00:38:59.815 "max_r2t_per_connection": 4, 00:38:59.815 "pdu_pool_size": 36864, 00:38:59.815 "immediate_data_pool_size": 16384, 00:38:59.815 "data_out_pool_size": 2048 00:38:59.815 } 00:38:59.815 } 00:38:59.815 ] 00:38:59.815 } 00:38:59.815 ] 00:38:59.815 } 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57047 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57047 ']' 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57047 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57047 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:59.815 killing process with pid 57047 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57047' 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57047 00:38:59.815 09:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57047 00:39:00.074 09:51:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57067 00:39:00.075 09:51:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:39:00.075 09:51:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57067 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57067 ']' 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57067 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:39:05.392 09:51:12 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57067 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:05.392 killing process with pid 57067 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57067' 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57067 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57067 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:39:05.392 00:39:05.392 real 0m6.320s 00:39:05.392 user 0m6.072s 00:39:05.392 sys 0m0.456s 00:39:05.392 ************************************ 00:39:05.392 END TEST skip_rpc_with_json 00:39:05.392 ************************************ 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:39:05.392 09:51:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:39:05.392 09:51:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:05.392 09:51:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:05.392 09:51:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:05.392 ************************************ 00:39:05.392 START TEST skip_rpc_with_delay 00:39:05.392 ************************************ 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:39:05.392 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:05.651 09:51:12 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:39:05.651 [2024-12-09 09:51:12.965005] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:05.651 00:39:05.651 real 0m0.095s 00:39:05.651 user 0m0.060s 00:39:05.651 sys 0m0.034s 00:39:05.651 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:05.651 ************************************ 00:39:05.652 END TEST skip_rpc_with_delay 00:39:05.652 ************************************ 00:39:05.652 09:51:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:39:05.652 09:51:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:39:05.652 09:51:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:39:05.652 09:51:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:39:05.652 09:51:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:05.652 09:51:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:05.652 09:51:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:05.652 ************************************ 00:39:05.652 START TEST exit_on_failed_rpc_init 00:39:05.652 ************************************ 00:39:05.652 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:39:05.652 09:51:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57176 00:39:05.652 09:51:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57176 00:39:05.652 09:51:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:39:05.652 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57176 ']' 00:39:05.652 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:05.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:05.652 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:05.652 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:05.652 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:05.652 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:39:05.652 [2024-12-09 09:51:13.114692] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:39:05.652 [2024-12-09 09:51:13.114798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57176 ] 00:39:05.911 [2024-12-09 09:51:13.267484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.911 [2024-12-09 09:51:13.301398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.911 [2024-12-09 09:51:13.340235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:39:06.172 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:39:06.172 [2024-12-09 09:51:13.548049] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:39:06.172 [2024-12-09 09:51:13.548673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57187 ] 00:39:06.431 [2024-12-09 09:51:13.707220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.431 [2024-12-09 09:51:13.747474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:06.431 [2024-12-09 09:51:13.747598] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:39:06.431 [2024-12-09 09:51:13.747625] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:39:06.431 [2024-12-09 09:51:13.747643] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57176 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57176 ']' 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57176 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57176 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:06.431 killing process with pid 57176 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57176' 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57176 00:39:06.431 09:51:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57176 00:39:07.001 00:39:07.001 real 0m1.157s 00:39:07.001 user 0m1.418s 00:39:07.001 sys 0m0.281s 00:39:07.001 ************************************ 00:39:07.001 END TEST exit_on_failed_rpc_init 00:39:07.001 ************************************ 00:39:07.001 09:51:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:07.001 09:51:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:39:07.001 09:51:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:39:07.001 00:39:07.001 real 0m13.315s 00:39:07.001 user 0m12.781s 00:39:07.001 sys 0m1.169s 00:39:07.001 09:51:14 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:07.001 ************************************ 00:39:07.001 09:51:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:07.001 END TEST skip_rpc 00:39:07.001 ************************************ 00:39:07.001 09:51:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:39:07.001 09:51:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:07.001 09:51:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:07.001 09:51:14 -- common/autotest_common.sh@10 -- # set +x 00:39:07.001 
************************************ 00:39:07.001 START TEST rpc_client 00:39:07.001 ************************************ 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:39:07.001 * Looking for test storage... 00:39:07.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:07.001 09:51:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:07.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.001 --rc genhtml_branch_coverage=1 00:39:07.001 --rc genhtml_function_coverage=1 00:39:07.001 --rc genhtml_legend=1 00:39:07.001 --rc geninfo_all_blocks=1 00:39:07.001 --rc geninfo_unexecuted_blocks=1 00:39:07.001 00:39:07.001 ' 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:07.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.001 --rc genhtml_branch_coverage=1 00:39:07.001 --rc genhtml_function_coverage=1 00:39:07.001 --rc genhtml_legend=1 00:39:07.001 --rc geninfo_all_blocks=1 00:39:07.001 --rc geninfo_unexecuted_blocks=1 00:39:07.001 00:39:07.001 ' 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:07.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.001 --rc genhtml_branch_coverage=1 00:39:07.001 --rc genhtml_function_coverage=1 00:39:07.001 --rc genhtml_legend=1 00:39:07.001 --rc geninfo_all_blocks=1 00:39:07.001 --rc geninfo_unexecuted_blocks=1 00:39:07.001 00:39:07.001 ' 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:07.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.001 --rc genhtml_branch_coverage=1 00:39:07.001 --rc genhtml_function_coverage=1 00:39:07.001 --rc genhtml_legend=1 00:39:07.001 --rc geninfo_all_blocks=1 00:39:07.001 --rc geninfo_unexecuted_blocks=1 00:39:07.001 00:39:07.001 ' 00:39:07.001 09:51:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:39:07.001 OK 00:39:07.001 09:51:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:39:07.001 00:39:07.001 real 0m0.198s 00:39:07.001 user 0m0.125s 00:39:07.001 sys 0m0.084s 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:07.001 09:51:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:39:07.001 ************************************ 00:39:07.001 END TEST rpc_client 00:39:07.001 ************************************ 00:39:07.261 09:51:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:39:07.261 09:51:14 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:07.261 09:51:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:07.261 09:51:14 -- common/autotest_common.sh@10 -- # set +x 00:39:07.261 ************************************ 00:39:07.261 START TEST json_config 00:39:07.261 ************************************ 00:39:07.261 09:51:14 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:39:07.261 09:51:14 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:07.261 09:51:14 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:07.261 09:51:14 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:39:07.261 09:51:14 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:07.261 09:51:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:07.261 09:51:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:07.261 09:51:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:07.261 09:51:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:39:07.261 09:51:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:39:07.261 09:51:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:39:07.261 09:51:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:39:07.261 09:51:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:39:07.261 09:51:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:39:07.261 09:51:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:39:07.261 09:51:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:07.261 09:51:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:39:07.261 09:51:14 json_config -- scripts/common.sh@345 -- # : 1 00:39:07.261 09:51:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:07.261 09:51:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:07.261 09:51:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:39:07.261 09:51:14 json_config -- scripts/common.sh@353 -- # local d=1 00:39:07.261 09:51:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:07.261 09:51:14 json_config -- scripts/common.sh@355 -- # echo 1 00:39:07.261 09:51:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:39:07.261 09:51:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:39:07.261 09:51:14 json_config -- scripts/common.sh@353 -- # local d=2 00:39:07.261 09:51:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:07.261 09:51:14 json_config -- scripts/common.sh@355 -- # echo 2 00:39:07.261 09:51:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:39:07.261 09:51:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:07.261 09:51:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:07.261 09:51:14 json_config -- scripts/common.sh@368 -- # return 0 00:39:07.261 09:51:14 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:07.261 09:51:14 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:07.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.261 --rc genhtml_branch_coverage=1 00:39:07.261 --rc genhtml_function_coverage=1 00:39:07.261 --rc genhtml_legend=1 00:39:07.261 --rc geninfo_all_blocks=1 00:39:07.261 --rc geninfo_unexecuted_blocks=1 00:39:07.261 00:39:07.261 ' 00:39:07.261 09:51:14 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:07.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.261 --rc genhtml_branch_coverage=1 00:39:07.261 --rc genhtml_function_coverage=1 00:39:07.261 --rc genhtml_legend=1 00:39:07.261 --rc geninfo_all_blocks=1 00:39:07.261 --rc geninfo_unexecuted_blocks=1 00:39:07.261 00:39:07.261 ' 00:39:07.261 09:51:14 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:07.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.261 --rc genhtml_branch_coverage=1 00:39:07.261 --rc genhtml_function_coverage=1 00:39:07.261 --rc genhtml_legend=1 00:39:07.261 --rc geninfo_all_blocks=1 00:39:07.261 --rc geninfo_unexecuted_blocks=1 00:39:07.261 00:39:07.261 ' 00:39:07.261 09:51:14 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:07.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.261 --rc genhtml_branch_coverage=1 00:39:07.261 --rc genhtml_function_coverage=1 00:39:07.261 --rc genhtml_legend=1 00:39:07.261 --rc geninfo_all_blocks=1 00:39:07.261 --rc geninfo_unexecuted_blocks=1 00:39:07.261 00:39:07.261 ' 00:39:07.261 09:51:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:07.261 09:51:14 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:07.261 09:51:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:07.261 09:51:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:39:07.261 09:51:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:07.261 09:51:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:07.261 09:51:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:07.261 09:51:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.262 09:51:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.262 09:51:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.262 09:51:14 json_config -- paths/export.sh@5 -- # export PATH 00:39:07.262 09:51:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.262 09:51:14 json_config -- nvmf/common.sh@51 -- # : 0 00:39:07.262 09:51:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:07.262 09:51:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:07.262 09:51:14 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:07.262 09:51:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:07.262 09:51:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:07.262 09:51:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:07.262 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:07.262 09:51:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:07.262 09:51:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:07.262 09:51:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:39:07.262 INFO: JSON configuration test init 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:39:07.262 09:51:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:07.262 09:51:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:39:07.262 09:51:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:07.262 09:51:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:07.262 09:51:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:39:07.262 09:51:14 json_config -- json_config/common.sh@9 -- # local app=target 00:39:07.262 09:51:14 json_config -- json_config/common.sh@10 -- # shift 
00:39:07.262 09:51:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:39:07.262 09:51:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:39:07.262 09:51:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:39:07.262 09:51:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:39:07.262 09:51:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:39:07.262 09:51:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57321 00:39:07.262 Waiting for target to run... 00:39:07.262 09:51:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:39:07.262 09:51:14 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:39:07.262 09:51:14 json_config -- json_config/common.sh@25 -- # waitforlisten 57321 /var/tmp/spdk_tgt.sock 00:39:07.262 09:51:14 json_config -- common/autotest_common.sh@835 -- # '[' -z 57321 ']' 00:39:07.262 09:51:14 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:39:07.262 09:51:14 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:07.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:39:07.262 09:51:14 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:39:07.262 09:51:14 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:07.262 09:51:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:07.520 [2024-12-09 09:51:14.815659] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:39:07.520 [2024-12-09 09:51:14.815768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57321 ] 00:39:07.777 [2024-12-09 09:51:15.148784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:07.777 [2024-12-09 09:51:15.188502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.712 09:51:15 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.712 09:51:15 json_config -- common/autotest_common.sh@868 -- # return 0 00:39:08.712 00:39:08.712 09:51:15 json_config -- json_config/common.sh@26 -- # echo '' 00:39:08.712 09:51:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:39:08.712 09:51:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:39:08.712 09:51:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:08.712 09:51:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:08.712 09:51:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:39:08.712 09:51:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:39:08.712 09:51:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:08.712 09:51:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:08.712 09:51:15 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:39:08.712 09:51:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:39:08.712 09:51:15 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:39:08.971 [2024-12-09 09:51:16.258515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:08.971 09:51:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:39:08.971 09:51:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:39:08.971 09:51:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:08.971 09:51:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:08.971 09:51:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:39:08.971 09:51:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:39:08.971 09:51:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:39:08.971 09:51:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:39:08.971 09:51:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:39:08.971 09:51:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:39:08.972 09:51:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:39:08.972 09:51:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@54 -- # sort 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:39:09.540 09:51:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:09.540 09:51:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:39:09.540 09:51:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:09.540 09:51:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:09.540 09:51:16 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:39:09.540 09:51:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:39:09.540 09:51:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:39:09.799 MallocForNvmf0 00:39:09.799 09:51:17 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:39:09.799 09:51:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:39:10.057 MallocForNvmf1 00:39:10.057 09:51:17 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:39:10.057 09:51:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:39:10.317 [2024-12-09 09:51:17.724498] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:10.317 09:51:17 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:10.317 09:51:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:10.577 09:51:18 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:39:10.577 09:51:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:39:10.836 09:51:18 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:39:10.836 09:51:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:39:11.095 09:51:18 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:39:11.095 09:51:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:39:11.353 [2024-12-09 09:51:18.745098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:11.353 09:51:18 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:39:11.353 09:51:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:11.353 09:51:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:11.353 09:51:18 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:39:11.353 09:51:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:11.353 09:51:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:11.353 09:51:18 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:39:11.354 09:51:18 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:39:11.354 09:51:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:39:11.921 MallocBdevForConfigChangeCheck 00:39:11.921 09:51:19 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:39:11.921 09:51:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:11.921 09:51:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:11.921 09:51:19 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:39:11.921 09:51:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:39:12.180 INFO: shutting down applications... 00:39:12.180 09:51:19 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:39:12.180 09:51:19 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:39:12.180 09:51:19 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:39:12.180 09:51:19 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:39:12.180 09:51:19 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:39:12.439 Calling clear_iscsi_subsystem 00:39:12.439 Calling clear_nvmf_subsystem 00:39:12.439 Calling clear_nbd_subsystem 00:39:12.439 Calling clear_ublk_subsystem 00:39:12.439 Calling clear_vhost_blk_subsystem 00:39:12.439 Calling clear_vhost_scsi_subsystem 00:39:12.439 Calling clear_bdev_subsystem 00:39:12.439 09:51:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:39:12.439 09:51:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:39:12.439 09:51:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:39:12.439 09:51:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:39:12.439 09:51:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:39:12.439 09:51:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:39:13.006 09:51:20 json_config -- json_config/json_config.sh@352 -- # break 00:39:13.006 09:51:20 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:39:13.006 09:51:20 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:39:13.006 09:51:20 json_config -- json_config/common.sh@31 -- # local app=target 00:39:13.006 09:51:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:39:13.006 09:51:20 json_config -- json_config/common.sh@35 -- # [[ -n 57321 ]] 00:39:13.006 09:51:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57321 00:39:13.006 09:51:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:39:13.006 09:51:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:39:13.006 09:51:20 json_config -- json_config/common.sh@41 -- # kill -0 57321 00:39:13.006 09:51:20 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:39:13.574 09:51:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:39:13.574 09:51:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:39:13.574 09:51:20 json_config -- json_config/common.sh@41 -- # kill -0 57321 00:39:13.574 09:51:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:39:13.574 09:51:20 json_config -- json_config/common.sh@43 -- # break 00:39:13.574 SPDK target shutdown done 00:39:13.574 09:51:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:39:13.574 09:51:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:39:13.574 INFO: relaunching applications... 00:39:13.574 09:51:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:39:13.574 09:51:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:39:13.574 09:51:20 json_config -- json_config/common.sh@9 -- # local app=target 00:39:13.574 09:51:20 json_config -- json_config/common.sh@10 -- # shift 00:39:13.574 09:51:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:39:13.574 09:51:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:39:13.574 09:51:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:39:13.574 09:51:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:39:13.574 09:51:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:39:13.574 09:51:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57522 00:39:13.574 Waiting for target to run... 00:39:13.574 09:51:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:39:13.574 09:51:20 json_config -- json_config/common.sh@25 -- # waitforlisten 57522 /var/tmp/spdk_tgt.sock 00:39:13.574 09:51:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:39:13.574 09:51:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 57522 ']' 00:39:13.574 09:51:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:39:13.574 09:51:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:13.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:39:13.574 09:51:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:39:13.574 09:51:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:13.574 09:51:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:13.574 [2024-12-09 09:51:20.930379] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
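What was just traced is the core round trip of the json_config test: the live configuration is captured with save_config, the target is emptied with clear_config.py and shut down with SIGINT, and a fresh spdk_tgt is relaunched from the saved file so the next step can compare live state against it. A minimal sketch of that cycle, assuming save_config's output is redirected into spdk_tgt_config.json (xtrace hides redirections, so the '>' is inferred rather than quoted):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  # SIGINT + poll loop until the old pid (57321 here) is gone, then relaunch from the file:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json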
00:39:13.574 [2024-12-09 09:51:20.930466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57522 ] 00:39:13.833 [2024-12-09 09:51:21.231465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.833 [2024-12-09 09:51:21.265871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:14.092 [2024-12-09 09:51:21.408784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:14.350 [2024-12-09 09:51:21.626009] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:14.350 [2024-12-09 09:51:21.658164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:14.609 09:51:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:14.609 09:51:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:39:14.609 00:39:14.609 09:51:21 json_config -- json_config/common.sh@26 -- # echo '' 00:39:14.609 09:51:21 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:39:14.609 INFO: Checking if target configuration is the same... 00:39:14.609 09:51:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:39:14.609 09:51:21 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:39:14.609 09:51:21 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:39:14.609 09:51:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:39:14.609 + '[' 2 -ne 2 ']' 00:39:14.609 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:39:14.609 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:39:14.609 + rootdir=/home/vagrant/spdk_repo/spdk 00:39:14.609 +++ basename /dev/fd/62 00:39:14.609 ++ mktemp /tmp/62.XXX 00:39:14.609 + tmp_file_1=/tmp/62.Ut5 00:39:14.609 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:39:14.609 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:39:14.609 + tmp_file_2=/tmp/spdk_tgt_config.json.yEF 00:39:14.609 + ret=0 00:39:14.609 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:39:15.177 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:39:15.177 + diff -u /tmp/62.Ut5 /tmp/spdk_tgt_config.json.yEF 00:39:15.177 + echo 'INFO: JSON config files are the same' 00:39:15.177 INFO: JSON config files are the same 00:39:15.177 + rm /tmp/62.Ut5 /tmp/spdk_tgt_config.json.yEF 00:39:15.177 + exit 0 00:39:15.177 09:51:22 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:39:15.177 INFO: changing configuration and checking if this can be detected... 00:39:15.177 09:51:22 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
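The "configuration is the same" verdict above comes from diffing two canonicalised JSON dumps: the live config pulled over RPC and the spdk_tgt_config.json the target was just relaunched from, both run through config_filter.py -method sort first (the NVMe-oF/TCP setup built at the top of the test, i.e. two malloc bdevs, a TCP transport, subsystem cnode1 with both namespaces and a listener on 127.0.0.1:4420, is what is being round-tripped). A sketch of the pipeline, assuming config_filter.py reads the config on stdin since xtrace does not show the plumbing:

  live=$(mktemp) saved=$(mktemp)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > "$live"
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
      < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$saved"
  diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'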
00:39:15.177 09:51:22 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:39:15.177 09:51:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:39:15.436 09:51:22 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:39:15.436 09:51:22 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:39:15.436 09:51:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:39:15.436 + '[' 2 -ne 2 ']' 00:39:15.436 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:39:15.436 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:39:15.436 + rootdir=/home/vagrant/spdk_repo/spdk 00:39:15.436 +++ basename /dev/fd/62 00:39:15.436 ++ mktemp /tmp/62.XXX 00:39:15.436 + tmp_file_1=/tmp/62.e5C 00:39:15.436 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:39:15.436 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:39:15.436 + tmp_file_2=/tmp/spdk_tgt_config.json.gqs 00:39:15.436 + ret=0 00:39:15.436 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:39:15.695 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:39:15.954 + diff -u /tmp/62.e5C /tmp/spdk_tgt_config.json.gqs 00:39:15.954 + ret=1 00:39:15.954 + echo '=== Start of file: /tmp/62.e5C ===' 00:39:15.954 + cat /tmp/62.e5C 00:39:15.954 + echo '=== End of file: /tmp/62.e5C ===' 00:39:15.954 + echo '' 00:39:15.954 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gqs ===' 00:39:15.954 + cat /tmp/spdk_tgt_config.json.gqs 00:39:15.954 + echo '=== End of file: /tmp/spdk_tgt_config.json.gqs ===' 00:39:15.954 + echo '' 00:39:15.954 + rm /tmp/62.e5C /tmp/spdk_tgt_config.json.gqs 00:39:15.954 + exit 1 00:39:15.954 INFO: configuration change detected. 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
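Change detection is the mirror image of the previous check: MallocBdevForConfigChangeCheck, a malloc bdev created at test init purely as a change marker, is deleted over RPC and the same sorted diff is repeated; this time it must exit non-zero (ret=1 above), which proves that a real configuration change shows up against the saved file. A sketch of the step, reusing the pipeline from the previous check:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      bdev_malloc_delete MallocBdevForConfigChangeCheck
  # rerun save_config | config_filter.py -method sort | diff; a non-zero diff status is the pass condition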
00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:39:15.954 09:51:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:15.954 09:51:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@324 -- # [[ -n 57522 ]] 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:39:15.954 09:51:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:15.954 09:51:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@200 -- # uname -s 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:39:15.954 09:51:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:15.954 09:51:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:15.954 09:51:23 json_config -- json_config/json_config.sh@330 -- # killprocess 57522 00:39:15.954 09:51:23 json_config -- common/autotest_common.sh@954 -- # '[' -z 57522 ']' 00:39:15.954 09:51:23 json_config -- common/autotest_common.sh@958 -- # kill -0 57522 00:39:15.955 09:51:23 json_config -- common/autotest_common.sh@959 -- # uname 00:39:15.955 09:51:23 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:15.955 09:51:23 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57522 00:39:15.955 09:51:23 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:15.955 09:51:23 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:15.955 killing process with pid 57522 00:39:15.955 09:51:23 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57522' 00:39:15.955 09:51:23 json_config -- common/autotest_common.sh@973 -- # kill 57522 00:39:15.955 09:51:23 json_config -- common/autotest_common.sh@978 -- # wait 57522 00:39:16.213 09:51:23 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:39:16.213 09:51:23 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:39:16.213 09:51:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:16.213 09:51:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:16.213 09:51:23 json_config -- json_config/json_config.sh@335 -- # return 0 00:39:16.213 INFO: Success 00:39:16.213 09:51:23 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:39:16.213 00:39:16.213 real 0m9.075s 00:39:16.213 user 0m13.501s 00:39:16.213 sys 0m1.513s 00:39:16.213 
09:51:23 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:16.213 ************************************ 00:39:16.213 09:51:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:39:16.213 END TEST json_config 00:39:16.213 ************************************ 00:39:16.213 09:51:23 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:39:16.213 09:51:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:16.213 09:51:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:16.213 09:51:23 -- common/autotest_common.sh@10 -- # set +x 00:39:16.213 ************************************ 00:39:16.213 START TEST json_config_extra_key 00:39:16.213 ************************************ 00:39:16.213 09:51:23 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:39:16.473 09:51:23 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:16.473 09:51:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:39:16.473 09:51:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:16.473 09:51:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:39:16.473 09:51:23 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.473 09:51:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:16.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.473 --rc genhtml_branch_coverage=1 00:39:16.473 --rc genhtml_function_coverage=1 00:39:16.473 --rc genhtml_legend=1 00:39:16.473 --rc geninfo_all_blocks=1 00:39:16.473 --rc geninfo_unexecuted_blocks=1 00:39:16.473 00:39:16.473 ' 00:39:16.473 09:51:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:16.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.473 --rc genhtml_branch_coverage=1 00:39:16.473 --rc genhtml_function_coverage=1 00:39:16.473 --rc genhtml_legend=1 00:39:16.473 --rc geninfo_all_blocks=1 00:39:16.473 --rc geninfo_unexecuted_blocks=1 00:39:16.473 00:39:16.473 ' 00:39:16.473 09:51:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:16.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.473 --rc genhtml_branch_coverage=1 00:39:16.473 --rc genhtml_function_coverage=1 00:39:16.473 --rc genhtml_legend=1 00:39:16.473 --rc geninfo_all_blocks=1 00:39:16.473 --rc geninfo_unexecuted_blocks=1 00:39:16.473 00:39:16.473 ' 00:39:16.473 09:51:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:16.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.473 --rc genhtml_branch_coverage=1 00:39:16.473 --rc genhtml_function_coverage=1 00:39:16.473 --rc genhtml_legend=1 00:39:16.473 --rc geninfo_all_blocks=1 00:39:16.473 --rc geninfo_unexecuted_blocks=1 00:39:16.473 00:39:16.473 ' 00:39:16.473 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:16.473 09:51:23 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.473 09:51:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.473 09:51:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.474 09:51:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.474 09:51:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.474 09:51:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.474 09:51:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.474 09:51:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:39:16.474 09:51:23 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.474 09:51:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:39:16.474 09:51:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.474 09:51:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.474 09:51:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.474 09:51:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.474 09:51:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.474 09:51:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:16.474 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:16.474 09:51:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:16.474 09:51:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.474 09:51:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:39:16.474 INFO: launching applications... 00:39:16.474 Waiting for target to run... 00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
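The declare statements above are how json_config/common.sh keeps per-application state: four associative arrays keyed by app name, with only 'target' used in this test. Reproduced from the trace for readability (the only difference from the earlier json_config run is the config path, which now points at extra_key.json):

  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')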
00:39:16.474 09:51:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57676 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57676 /var/tmp/spdk_tgt.sock 00:39:16.474 09:51:23 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57676 ']' 00:39:16.474 09:51:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:39:16.474 09:51:23 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:39:16.474 09:51:23 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:16.474 09:51:23 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:39:16.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:39:16.474 09:51:23 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:16.474 09:51:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:39:16.474 [2024-12-09 09:51:23.925923] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:39:16.474 [2024-12-09 09:51:23.926265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57676 ] 00:39:17.042 [2024-12-09 09:51:24.242759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.042 [2024-12-09 09:51:24.270760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.042 [2024-12-09 09:51:24.297078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:17.610 09:51:24 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:17.610 09:51:24 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:39:17.610 09:51:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:39:17.610 00:39:17.610 09:51:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:39:17.610 INFO: shutting down applications... 
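The shutdown that follows on the next lines uses the same helper every json_config test uses: send SIGINT to the recorded pid, then poll with kill -0 for up to 30 iterations with a 0.5 s sleep, clearing the pid and breaking as soon as the process is gone. A sketch of that loop as it behaves in this trace (the helper itself lives in json_config/common.sh):

  kill -SIGINT "${app_pid[$app]}"
  for ((i = 0; i < 30; i++)); do
      kill -0 "${app_pid[$app]}" 2>/dev/null || break
      sleep 0.5
  done
  app_pid[$app]=
  echo 'SPDK target shutdown done'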
00:39:17.610 09:51:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:39:17.610 09:51:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:39:17.610 09:51:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:39:17.610 09:51:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57676 ]] 00:39:17.610 09:51:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57676 00:39:17.610 09:51:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:39:17.610 09:51:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:39:17.610 09:51:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57676 00:39:17.610 09:51:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:39:18.184 09:51:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:39:18.184 09:51:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:39:18.184 09:51:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57676 00:39:18.184 09:51:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:39:18.184 09:51:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:39:18.184 SPDK target shutdown done 00:39:18.184 Success 00:39:18.184 09:51:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:39:18.184 09:51:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:39:18.184 09:51:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:39:18.184 ************************************ 00:39:18.184 END TEST json_config_extra_key 00:39:18.184 ************************************ 00:39:18.184 00:39:18.184 real 0m1.828s 00:39:18.184 user 0m1.814s 00:39:18.184 sys 0m0.332s 00:39:18.184 09:51:25 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:18.184 09:51:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:39:18.184 09:51:25 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:39:18.184 09:51:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:18.184 09:51:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:18.184 09:51:25 -- common/autotest_common.sh@10 -- # set +x 00:39:18.184 ************************************ 00:39:18.184 START TEST alias_rpc 00:39:18.184 ************************************ 00:39:18.184 09:51:25 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:39:18.184 * Looking for test storage... 
00:39:18.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:39:18.184 09:51:25 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:18.184 09:51:25 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:18.184 09:51:25 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:39:18.443 09:51:25 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:18.443 09:51:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:39:18.444 09:51:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:39:18.444 09:51:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:39:18.444 09:51:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:39:18.444 09:51:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:18.444 09:51:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:39:18.444 09:51:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:39:18.444 09:51:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:18.444 09:51:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:18.444 09:51:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:39:18.444 09:51:25 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:18.444 09:51:25 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:18.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.444 --rc genhtml_branch_coverage=1 00:39:18.444 --rc genhtml_function_coverage=1 00:39:18.444 --rc genhtml_legend=1 00:39:18.444 --rc geninfo_all_blocks=1 00:39:18.444 --rc geninfo_unexecuted_blocks=1 00:39:18.444 00:39:18.444 ' 00:39:18.444 09:51:25 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:18.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.444 --rc genhtml_branch_coverage=1 00:39:18.444 --rc genhtml_function_coverage=1 00:39:18.444 --rc genhtml_legend=1 00:39:18.444 --rc geninfo_all_blocks=1 00:39:18.444 --rc geninfo_unexecuted_blocks=1 00:39:18.444 00:39:18.444 ' 00:39:18.444 09:51:25 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:18.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.444 --rc genhtml_branch_coverage=1 00:39:18.444 --rc genhtml_function_coverage=1 00:39:18.444 --rc genhtml_legend=1 00:39:18.444 --rc geninfo_all_blocks=1 00:39:18.444 --rc geninfo_unexecuted_blocks=1 00:39:18.444 00:39:18.444 ' 00:39:18.444 09:51:25 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:18.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.444 --rc genhtml_branch_coverage=1 00:39:18.444 --rc genhtml_function_coverage=1 00:39:18.444 --rc genhtml_legend=1 00:39:18.444 --rc geninfo_all_blocks=1 00:39:18.444 --rc geninfo_unexecuted_blocks=1 00:39:18.444 00:39:18.444 ' 00:39:18.444 09:51:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:39:18.444 09:51:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57754 00:39:18.444 09:51:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:18.444 09:51:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57754 00:39:18.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:18.444 09:51:25 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57754 ']' 00:39:18.444 09:51:25 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:18.444 09:51:25 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:18.444 09:51:25 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:18.444 09:51:25 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:18.444 09:51:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:18.444 [2024-12-09 09:51:25.814558] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
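At this point alias_rpc has only started a bare spdk_tgt (pid 57754, with no --json and no extra cpu/memory flags this time); the rest of the test, traced below, pushes a configuration into it through rpc.py load_config -i and then kills it. The -i is read here as --include-aliases, i.e. also registering the deprecated method aliases this test exists to exercise; that expansion is an assumption, the trace only shows the short option. Condensed:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!                 # 57754 in this run
  waitforlisten "$spdk_tgt_pid"   # autotest_common.sh helper, as traced above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i   # the JSON config source is not visible in the trace
  killprocess "$spdk_tgt_pid"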
00:39:18.444 [2024-12-09 09:51:25.814711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57754 ] 00:39:18.703 [2024-12-09 09:51:25.968172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.703 [2024-12-09 09:51:26.003068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:18.703 [2024-12-09 09:51:26.048334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:18.703 09:51:26 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:18.703 09:51:26 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:39:18.703 09:51:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:39:19.272 09:51:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57754 00:39:19.272 09:51:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57754 ']' 00:39:19.272 09:51:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57754 00:39:19.272 09:51:26 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:39:19.272 09:51:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:19.272 09:51:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57754 00:39:19.272 killing process with pid 57754 00:39:19.272 09:51:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:19.272 09:51:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:19.272 09:51:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57754' 00:39:19.272 09:51:26 alias_rpc -- common/autotest_common.sh@973 -- # kill 57754 00:39:19.272 09:51:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 57754 00:39:19.532 ************************************ 00:39:19.532 END TEST alias_rpc 00:39:19.532 ************************************ 00:39:19.532 00:39:19.532 real 0m1.350s 00:39:19.532 user 0m1.605s 00:39:19.532 sys 0m0.353s 00:39:19.532 09:51:26 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:19.532 09:51:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:19.532 09:51:26 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:39:19.532 09:51:26 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:39:19.532 09:51:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:19.532 09:51:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:19.532 09:51:26 -- common/autotest_common.sh@10 -- # set +x 00:39:19.532 ************************************ 00:39:19.532 START TEST spdkcli_tcp 00:39:19.532 ************************************ 00:39:19.532 09:51:26 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:39:19.792 * Looking for test storage... 
00:39:19.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:19.792 09:51:27 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:19.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.792 --rc genhtml_branch_coverage=1 00:39:19.792 --rc genhtml_function_coverage=1 00:39:19.792 --rc genhtml_legend=1 00:39:19.792 --rc geninfo_all_blocks=1 00:39:19.792 --rc geninfo_unexecuted_blocks=1 00:39:19.792 00:39:19.792 ' 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:19.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.792 --rc genhtml_branch_coverage=1 00:39:19.792 --rc genhtml_function_coverage=1 00:39:19.792 --rc genhtml_legend=1 00:39:19.792 --rc geninfo_all_blocks=1 00:39:19.792 --rc geninfo_unexecuted_blocks=1 00:39:19.792 
00:39:19.792 ' 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:19.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.792 --rc genhtml_branch_coverage=1 00:39:19.792 --rc genhtml_function_coverage=1 00:39:19.792 --rc genhtml_legend=1 00:39:19.792 --rc geninfo_all_blocks=1 00:39:19.792 --rc geninfo_unexecuted_blocks=1 00:39:19.792 00:39:19.792 ' 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:19.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:19.792 --rc genhtml_branch_coverage=1 00:39:19.792 --rc genhtml_function_coverage=1 00:39:19.792 --rc genhtml_legend=1 00:39:19.792 --rc geninfo_all_blocks=1 00:39:19.792 --rc geninfo_unexecuted_blocks=1 00:39:19.792 00:39:19.792 ' 00:39:19.792 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:39:19.792 09:51:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:39:19.792 09:51:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:39:19.792 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:39:19.792 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:39:19.792 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:19.792 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:19.792 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57825 00:39:19.792 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:39:19.792 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57825 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57825 ']' 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:19.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:19.792 09:51:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:19.792 [2024-12-09 09:51:27.244143] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
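spdkcli_tcp has just started its target with -m 0x3 (two reactors) and declared IP_ADDRESS=127.0.0.1 / PORT=9998. The actual TCP exercise follows on the next lines: socat bridges TCP port 9998 to the target's UNIX domain socket, and rpc.py is pointed at the TCP address instead of the socket, so the full method list comes back over TCP. The bridge, with the retry/timeout flags exactly as used in this run:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!        # 57840 in this run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods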
00:39:19.792 [2024-12-09 09:51:27.244529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57825 ] 00:39:20.051 [2024-12-09 09:51:27.397917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:20.051 [2024-12-09 09:51:27.434842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:20.051 [2024-12-09 09:51:27.434856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.051 [2024-12-09 09:51:27.477564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:20.310 09:51:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:20.310 09:51:27 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:39:20.310 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57840 00:39:20.310 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:39:20.310 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:39:20.569 [ 00:39:20.569 "bdev_malloc_delete", 00:39:20.569 "bdev_malloc_create", 00:39:20.569 "bdev_null_resize", 00:39:20.569 "bdev_null_delete", 00:39:20.569 "bdev_null_create", 00:39:20.569 "bdev_nvme_cuse_unregister", 00:39:20.569 "bdev_nvme_cuse_register", 00:39:20.569 "bdev_opal_new_user", 00:39:20.569 "bdev_opal_set_lock_state", 00:39:20.569 "bdev_opal_delete", 00:39:20.569 "bdev_opal_get_info", 00:39:20.569 "bdev_opal_create", 00:39:20.569 "bdev_nvme_opal_revert", 00:39:20.569 "bdev_nvme_opal_init", 00:39:20.569 "bdev_nvme_send_cmd", 00:39:20.569 "bdev_nvme_set_keys", 00:39:20.569 "bdev_nvme_get_path_iostat", 00:39:20.569 "bdev_nvme_get_mdns_discovery_info", 00:39:20.569 "bdev_nvme_stop_mdns_discovery", 00:39:20.569 "bdev_nvme_start_mdns_discovery", 00:39:20.569 "bdev_nvme_set_multipath_policy", 00:39:20.569 "bdev_nvme_set_preferred_path", 00:39:20.569 "bdev_nvme_get_io_paths", 00:39:20.569 "bdev_nvme_remove_error_injection", 00:39:20.569 "bdev_nvme_add_error_injection", 00:39:20.569 "bdev_nvme_get_discovery_info", 00:39:20.569 "bdev_nvme_stop_discovery", 00:39:20.569 "bdev_nvme_start_discovery", 00:39:20.569 "bdev_nvme_get_controller_health_info", 00:39:20.569 "bdev_nvme_disable_controller", 00:39:20.569 "bdev_nvme_enable_controller", 00:39:20.569 "bdev_nvme_reset_controller", 00:39:20.569 "bdev_nvme_get_transport_statistics", 00:39:20.569 "bdev_nvme_apply_firmware", 00:39:20.569 "bdev_nvme_detach_controller", 00:39:20.569 "bdev_nvme_get_controllers", 00:39:20.569 "bdev_nvme_attach_controller", 00:39:20.569 "bdev_nvme_set_hotplug", 00:39:20.569 "bdev_nvme_set_options", 00:39:20.569 "bdev_passthru_delete", 00:39:20.569 "bdev_passthru_create", 00:39:20.569 "bdev_lvol_set_parent_bdev", 00:39:20.569 "bdev_lvol_set_parent", 00:39:20.569 "bdev_lvol_check_shallow_copy", 00:39:20.569 "bdev_lvol_start_shallow_copy", 00:39:20.569 "bdev_lvol_grow_lvstore", 00:39:20.569 "bdev_lvol_get_lvols", 00:39:20.569 "bdev_lvol_get_lvstores", 00:39:20.569 "bdev_lvol_delete", 00:39:20.569 "bdev_lvol_set_read_only", 00:39:20.569 "bdev_lvol_resize", 00:39:20.569 "bdev_lvol_decouple_parent", 00:39:20.569 "bdev_lvol_inflate", 00:39:20.569 "bdev_lvol_rename", 00:39:20.569 "bdev_lvol_clone_bdev", 00:39:20.569 "bdev_lvol_clone", 00:39:20.569 "bdev_lvol_snapshot", 
00:39:20.569 "bdev_lvol_create", 00:39:20.569 "bdev_lvol_delete_lvstore", 00:39:20.569 "bdev_lvol_rename_lvstore", 00:39:20.569 "bdev_lvol_create_lvstore", 00:39:20.569 "bdev_raid_set_options", 00:39:20.569 "bdev_raid_remove_base_bdev", 00:39:20.569 "bdev_raid_add_base_bdev", 00:39:20.569 "bdev_raid_delete", 00:39:20.569 "bdev_raid_create", 00:39:20.569 "bdev_raid_get_bdevs", 00:39:20.569 "bdev_error_inject_error", 00:39:20.569 "bdev_error_delete", 00:39:20.569 "bdev_error_create", 00:39:20.569 "bdev_split_delete", 00:39:20.569 "bdev_split_create", 00:39:20.569 "bdev_delay_delete", 00:39:20.569 "bdev_delay_create", 00:39:20.569 "bdev_delay_update_latency", 00:39:20.569 "bdev_zone_block_delete", 00:39:20.569 "bdev_zone_block_create", 00:39:20.569 "blobfs_create", 00:39:20.569 "blobfs_detect", 00:39:20.569 "blobfs_set_cache_size", 00:39:20.569 "bdev_aio_delete", 00:39:20.569 "bdev_aio_rescan", 00:39:20.569 "bdev_aio_create", 00:39:20.569 "bdev_ftl_set_property", 00:39:20.569 "bdev_ftl_get_properties", 00:39:20.569 "bdev_ftl_get_stats", 00:39:20.569 "bdev_ftl_unmap", 00:39:20.569 "bdev_ftl_unload", 00:39:20.569 "bdev_ftl_delete", 00:39:20.569 "bdev_ftl_load", 00:39:20.569 "bdev_ftl_create", 00:39:20.569 "bdev_virtio_attach_controller", 00:39:20.569 "bdev_virtio_scsi_get_devices", 00:39:20.569 "bdev_virtio_detach_controller", 00:39:20.569 "bdev_virtio_blk_set_hotplug", 00:39:20.569 "bdev_iscsi_delete", 00:39:20.569 "bdev_iscsi_create", 00:39:20.569 "bdev_iscsi_set_options", 00:39:20.569 "bdev_uring_delete", 00:39:20.569 "bdev_uring_rescan", 00:39:20.569 "bdev_uring_create", 00:39:20.569 "accel_error_inject_error", 00:39:20.569 "ioat_scan_accel_module", 00:39:20.569 "dsa_scan_accel_module", 00:39:20.569 "iaa_scan_accel_module", 00:39:20.569 "keyring_file_remove_key", 00:39:20.569 "keyring_file_add_key", 00:39:20.569 "keyring_linux_set_options", 00:39:20.569 "fsdev_aio_delete", 00:39:20.569 "fsdev_aio_create", 00:39:20.569 "iscsi_get_histogram", 00:39:20.569 "iscsi_enable_histogram", 00:39:20.569 "iscsi_set_options", 00:39:20.569 "iscsi_get_auth_groups", 00:39:20.569 "iscsi_auth_group_remove_secret", 00:39:20.569 "iscsi_auth_group_add_secret", 00:39:20.569 "iscsi_delete_auth_group", 00:39:20.569 "iscsi_create_auth_group", 00:39:20.569 "iscsi_set_discovery_auth", 00:39:20.569 "iscsi_get_options", 00:39:20.569 "iscsi_target_node_request_logout", 00:39:20.569 "iscsi_target_node_set_redirect", 00:39:20.569 "iscsi_target_node_set_auth", 00:39:20.569 "iscsi_target_node_add_lun", 00:39:20.569 "iscsi_get_stats", 00:39:20.569 "iscsi_get_connections", 00:39:20.569 "iscsi_portal_group_set_auth", 00:39:20.569 "iscsi_start_portal_group", 00:39:20.569 "iscsi_delete_portal_group", 00:39:20.569 "iscsi_create_portal_group", 00:39:20.569 "iscsi_get_portal_groups", 00:39:20.569 "iscsi_delete_target_node", 00:39:20.569 "iscsi_target_node_remove_pg_ig_maps", 00:39:20.569 "iscsi_target_node_add_pg_ig_maps", 00:39:20.569 "iscsi_create_target_node", 00:39:20.569 "iscsi_get_target_nodes", 00:39:20.569 "iscsi_delete_initiator_group", 00:39:20.569 "iscsi_initiator_group_remove_initiators", 00:39:20.569 "iscsi_initiator_group_add_initiators", 00:39:20.569 "iscsi_create_initiator_group", 00:39:20.569 "iscsi_get_initiator_groups", 00:39:20.569 "nvmf_set_crdt", 00:39:20.569 "nvmf_set_config", 00:39:20.569 "nvmf_set_max_subsystems", 00:39:20.569 "nvmf_stop_mdns_prr", 00:39:20.569 "nvmf_publish_mdns_prr", 00:39:20.569 "nvmf_subsystem_get_listeners", 00:39:20.569 "nvmf_subsystem_get_qpairs", 00:39:20.569 
"nvmf_subsystem_get_controllers", 00:39:20.569 "nvmf_get_stats", 00:39:20.569 "nvmf_get_transports", 00:39:20.569 "nvmf_create_transport", 00:39:20.569 "nvmf_get_targets", 00:39:20.569 "nvmf_delete_target", 00:39:20.569 "nvmf_create_target", 00:39:20.569 "nvmf_subsystem_allow_any_host", 00:39:20.569 "nvmf_subsystem_set_keys", 00:39:20.569 "nvmf_subsystem_remove_host", 00:39:20.569 "nvmf_subsystem_add_host", 00:39:20.569 "nvmf_ns_remove_host", 00:39:20.569 "nvmf_ns_add_host", 00:39:20.569 "nvmf_subsystem_remove_ns", 00:39:20.569 "nvmf_subsystem_set_ns_ana_group", 00:39:20.569 "nvmf_subsystem_add_ns", 00:39:20.569 "nvmf_subsystem_listener_set_ana_state", 00:39:20.569 "nvmf_discovery_get_referrals", 00:39:20.569 "nvmf_discovery_remove_referral", 00:39:20.569 "nvmf_discovery_add_referral", 00:39:20.569 "nvmf_subsystem_remove_listener", 00:39:20.569 "nvmf_subsystem_add_listener", 00:39:20.569 "nvmf_delete_subsystem", 00:39:20.569 "nvmf_create_subsystem", 00:39:20.569 "nvmf_get_subsystems", 00:39:20.569 "env_dpdk_get_mem_stats", 00:39:20.569 "nbd_get_disks", 00:39:20.569 "nbd_stop_disk", 00:39:20.569 "nbd_start_disk", 00:39:20.569 "ublk_recover_disk", 00:39:20.569 "ublk_get_disks", 00:39:20.569 "ublk_stop_disk", 00:39:20.569 "ublk_start_disk", 00:39:20.569 "ublk_destroy_target", 00:39:20.569 "ublk_create_target", 00:39:20.569 "virtio_blk_create_transport", 00:39:20.569 "virtio_blk_get_transports", 00:39:20.569 "vhost_controller_set_coalescing", 00:39:20.569 "vhost_get_controllers", 00:39:20.569 "vhost_delete_controller", 00:39:20.569 "vhost_create_blk_controller", 00:39:20.570 "vhost_scsi_controller_remove_target", 00:39:20.570 "vhost_scsi_controller_add_target", 00:39:20.570 "vhost_start_scsi_controller", 00:39:20.570 "vhost_create_scsi_controller", 00:39:20.570 "thread_set_cpumask", 00:39:20.570 "scheduler_set_options", 00:39:20.570 "framework_get_governor", 00:39:20.570 "framework_get_scheduler", 00:39:20.570 "framework_set_scheduler", 00:39:20.570 "framework_get_reactors", 00:39:20.570 "thread_get_io_channels", 00:39:20.570 "thread_get_pollers", 00:39:20.570 "thread_get_stats", 00:39:20.570 "framework_monitor_context_switch", 00:39:20.570 "spdk_kill_instance", 00:39:20.570 "log_enable_timestamps", 00:39:20.570 "log_get_flags", 00:39:20.570 "log_clear_flag", 00:39:20.570 "log_set_flag", 00:39:20.570 "log_get_level", 00:39:20.570 "log_set_level", 00:39:20.570 "log_get_print_level", 00:39:20.570 "log_set_print_level", 00:39:20.570 "framework_enable_cpumask_locks", 00:39:20.570 "framework_disable_cpumask_locks", 00:39:20.570 "framework_wait_init", 00:39:20.570 "framework_start_init", 00:39:20.570 "scsi_get_devices", 00:39:20.570 "bdev_get_histogram", 00:39:20.570 "bdev_enable_histogram", 00:39:20.570 "bdev_set_qos_limit", 00:39:20.570 "bdev_set_qd_sampling_period", 00:39:20.570 "bdev_get_bdevs", 00:39:20.570 "bdev_reset_iostat", 00:39:20.570 "bdev_get_iostat", 00:39:20.570 "bdev_examine", 00:39:20.570 "bdev_wait_for_examine", 00:39:20.570 "bdev_set_options", 00:39:20.570 "accel_get_stats", 00:39:20.570 "accel_set_options", 00:39:20.570 "accel_set_driver", 00:39:20.570 "accel_crypto_key_destroy", 00:39:20.570 "accel_crypto_keys_get", 00:39:20.570 "accel_crypto_key_create", 00:39:20.570 "accel_assign_opc", 00:39:20.570 "accel_get_module_info", 00:39:20.570 "accel_get_opc_assignments", 00:39:20.570 "vmd_rescan", 00:39:20.570 "vmd_remove_device", 00:39:20.570 "vmd_enable", 00:39:20.570 "sock_get_default_impl", 00:39:20.570 "sock_set_default_impl", 00:39:20.570 "sock_impl_set_options", 00:39:20.570 
"sock_impl_get_options", 00:39:20.570 "iobuf_get_stats", 00:39:20.570 "iobuf_set_options", 00:39:20.570 "keyring_get_keys", 00:39:20.570 "framework_get_pci_devices", 00:39:20.570 "framework_get_config", 00:39:20.570 "framework_get_subsystems", 00:39:20.570 "fsdev_set_opts", 00:39:20.570 "fsdev_get_opts", 00:39:20.570 "trace_get_info", 00:39:20.570 "trace_get_tpoint_group_mask", 00:39:20.570 "trace_disable_tpoint_group", 00:39:20.570 "trace_enable_tpoint_group", 00:39:20.570 "trace_clear_tpoint_mask", 00:39:20.570 "trace_set_tpoint_mask", 00:39:20.570 "notify_get_notifications", 00:39:20.570 "notify_get_types", 00:39:20.570 "spdk_get_version", 00:39:20.570 "rpc_get_methods" 00:39:20.570 ] 00:39:20.570 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:39:20.570 09:51:27 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:20.570 09:51:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:20.570 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:20.570 09:51:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57825 00:39:20.570 09:51:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57825 ']' 00:39:20.570 09:51:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57825 00:39:20.570 09:51:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:39:20.570 09:51:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:20.570 09:51:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57825 00:39:20.570 09:51:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:20.570 09:51:27 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:20.570 killing process with pid 57825 00:39:20.570 09:51:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57825' 00:39:20.570 09:51:28 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57825 00:39:20.570 09:51:28 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57825 00:39:20.828 ************************************ 00:39:20.828 END TEST spdkcli_tcp 00:39:20.828 ************************************ 00:39:20.828 00:39:20.828 real 0m1.363s 00:39:20.828 user 0m2.336s 00:39:20.828 sys 0m0.387s 00:39:20.828 09:51:28 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:20.828 09:51:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:21.088 09:51:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:39:21.088 09:51:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:21.088 09:51:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:21.088 09:51:28 -- common/autotest_common.sh@10 -- # set +x 00:39:21.088 ************************************ 00:39:21.088 START TEST dpdk_mem_utility 00:39:21.088 ************************************ 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:39:21.088 * Looking for test storage... 
00:39:21.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:21.088 09:51:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:21.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.088 --rc genhtml_branch_coverage=1 00:39:21.088 --rc genhtml_function_coverage=1 00:39:21.088 --rc genhtml_legend=1 00:39:21.088 --rc geninfo_all_blocks=1 00:39:21.088 --rc geninfo_unexecuted_blocks=1 00:39:21.088 00:39:21.088 ' 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:21.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.088 --rc 
genhtml_branch_coverage=1 00:39:21.088 --rc genhtml_function_coverage=1 00:39:21.088 --rc genhtml_legend=1 00:39:21.088 --rc geninfo_all_blocks=1 00:39:21.088 --rc geninfo_unexecuted_blocks=1 00:39:21.088 00:39:21.088 ' 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:21.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.088 --rc genhtml_branch_coverage=1 00:39:21.088 --rc genhtml_function_coverage=1 00:39:21.088 --rc genhtml_legend=1 00:39:21.088 --rc geninfo_all_blocks=1 00:39:21.088 --rc geninfo_unexecuted_blocks=1 00:39:21.088 00:39:21.088 ' 00:39:21.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:21.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.088 --rc genhtml_branch_coverage=1 00:39:21.088 --rc genhtml_function_coverage=1 00:39:21.088 --rc genhtml_legend=1 00:39:21.088 --rc geninfo_all_blocks=1 00:39:21.088 --rc geninfo_unexecuted_blocks=1 00:39:21.088 00:39:21.088 ' 00:39:21.088 09:51:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:39:21.088 09:51:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57916 00:39:21.088 09:51:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57916 00:39:21.088 09:51:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57916 ']' 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:21.088 09:51:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:39:21.347 [2024-12-09 09:51:28.626755] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
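Before the memory queries below are issued, the test starts a bare spdk_tgt and blocks in waitforlisten until the application's JSON-RPC socket at /var/tmp/spdk.sock appears. A minimal sketch of that wait, assuming the paths shown above (the polling loop is illustrative, not the exact autotest_common.sh helper):

    # Start the target in the background and wait for its RPC listener to come up.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdkpid=$!
    until [ -S /var/tmp/spdk.sock ]; do
        sleep 0.1    # keep polling until spdk_tgt has created the socket
    done
    echo "spdk_tgt (pid $spdkpid) is listening on /var/tmp/spdk.sock"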
00:39:21.347 [2024-12-09 09:51:28.627556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57916 ] 00:39:21.347 [2024-12-09 09:51:28.783010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:21.347 [2024-12-09 09:51:28.829144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.606 [2024-12-09 09:51:28.879082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:21.606 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:21.606 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:39:21.606 09:51:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:39:21.606 09:51:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:39:21.606 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.606 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:39:21.606 { 00:39:21.606 "filename": "/tmp/spdk_mem_dump.txt" 00:39:21.606 } 00:39:21.606 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.606 09:51:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:39:21.866 DPDK memory size 818.000000 MiB in 1 heap(s) 00:39:21.866 1 heaps totaling size 818.000000 MiB 00:39:21.866 size: 818.000000 MiB heap id: 0 00:39:21.866 end heaps---------- 00:39:21.866 9 mempools totaling size 603.782043 MiB 00:39:21.866 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:39:21.866 size: 158.602051 MiB name: PDU_data_out_Pool 00:39:21.866 size: 100.555481 MiB name: bdev_io_57916 00:39:21.867 size: 50.003479 MiB name: msgpool_57916 00:39:21.867 size: 36.509338 MiB name: fsdev_io_57916 00:39:21.867 size: 21.763794 MiB name: PDU_Pool 00:39:21.867 size: 19.513306 MiB name: SCSI_TASK_Pool 00:39:21.867 size: 4.133484 MiB name: evtpool_57916 00:39:21.867 size: 0.026123 MiB name: Session_Pool 00:39:21.867 end mempools------- 00:39:21.867 6 memzones totaling size 4.142822 MiB 00:39:21.867 size: 1.000366 MiB name: RG_ring_0_57916 00:39:21.867 size: 1.000366 MiB name: RG_ring_1_57916 00:39:21.867 size: 1.000366 MiB name: RG_ring_4_57916 00:39:21.867 size: 1.000366 MiB name: RG_ring_5_57916 00:39:21.867 size: 0.125366 MiB name: RG_ring_2_57916 00:39:21.867 size: 0.015991 MiB name: RG_ring_3_57916 00:39:21.867 end memzones------- 00:39:21.867 09:51:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:39:21.867 heap id: 0 total size: 818.000000 MiB number of busy elements: 316 number of free elements: 15 00:39:21.867 list of free elements. 
size: 10.802673 MiB 00:39:21.867 element at address: 0x200019200000 with size: 0.999878 MiB 00:39:21.867 element at address: 0x200019400000 with size: 0.999878 MiB 00:39:21.867 element at address: 0x200032000000 with size: 0.994446 MiB 00:39:21.867 element at address: 0x200000400000 with size: 0.993958 MiB 00:39:21.867 element at address: 0x200006400000 with size: 0.959839 MiB 00:39:21.867 element at address: 0x200012c00000 with size: 0.944275 MiB 00:39:21.867 element at address: 0x200019600000 with size: 0.936584 MiB 00:39:21.867 element at address: 0x200000200000 with size: 0.717346 MiB 00:39:21.867 element at address: 0x20001ae00000 with size: 0.567871 MiB 00:39:21.867 element at address: 0x20000a600000 with size: 0.488892 MiB 00:39:21.867 element at address: 0x200000c00000 with size: 0.486267 MiB 00:39:21.867 element at address: 0x200019800000 with size: 0.485657 MiB 00:39:21.867 element at address: 0x200003e00000 with size: 0.480286 MiB 00:39:21.867 element at address: 0x200028200000 with size: 0.395752 MiB 00:39:21.867 element at address: 0x200000800000 with size: 0.351746 MiB 00:39:21.867 list of standard malloc elements. size: 199.268433 MiB 00:39:21.867 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:39:21.867 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:39:21.867 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:39:21.867 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:39:21.867 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:39:21.867 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:39:21.867 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:39:21.867 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:39:21.867 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:39:21.867 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:39:21.867 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000085e580 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087e840 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087e900 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087f080 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087f140 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087f200 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087f380 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087f440 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087f500 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x20000087f680 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:39:21.867 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:39:21.867 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:39:21.867 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200000cff000 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200003efb980 with size: 0.000183 MiB 00:39:21.868 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92d40 with size: 0.000183 MiB 
00:39:21.868 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:39:21.868 element at 
address: 0x20001ae952c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:39:21.868 element at address: 0x200028265500 with size: 0.000183 MiB 00:39:21.868 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826c480 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826c540 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826c600 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826c780 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826c840 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826c900 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826d080 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826d140 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826d200 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826d380 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826d440 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826d500 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826d680 with size: 0.000183 MiB 00:39:21.868 element at address: 0x20002826d740 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826d800 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826d980 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826da40 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826db00 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826de00 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826df80 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e040 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e100 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e280 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e340 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e400 
with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e580 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e640 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e700 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e880 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826e940 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f000 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f180 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f240 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f300 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f480 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f540 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f600 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f780 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f840 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f900 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:39:21.869 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:39:21.869 list of memzone associated elements. 
size: 607.928894 MiB 00:39:21.869 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:39:21.869 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:39:21.869 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:39:21.869 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:39:21.869 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:39:21.869 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57916_0 00:39:21.869 element at address: 0x200000dff380 with size: 48.003052 MiB 00:39:21.869 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57916_0 00:39:21.869 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:39:21.869 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57916_0 00:39:21.869 element at address: 0x2000199be940 with size: 20.255554 MiB 00:39:21.869 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:39:21.869 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:39:21.869 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:39:21.869 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:39:21.869 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57916_0 00:39:21.869 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:39:21.869 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57916 00:39:21.869 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:39:21.869 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57916 00:39:21.869 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:39:21.869 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:39:21.869 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:39:21.869 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:39:21.869 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:39:21.869 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:39:21.869 element at address: 0x200003efba40 with size: 1.008118 MiB 00:39:21.869 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:39:21.869 element at address: 0x200000cff180 with size: 1.000488 MiB 00:39:21.869 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57916 00:39:21.869 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:39:21.869 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57916 00:39:21.869 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:39:21.869 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57916 00:39:21.869 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:39:21.869 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57916 00:39:21.869 element at address: 0x20000087f740 with size: 0.500488 MiB 00:39:21.869 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57916 00:39:21.869 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:39:21.869 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57916 00:39:21.869 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:39:21.869 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:39:21.869 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:39:21.869 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:39:21.869 element at address: 0x20001987c540 with size: 0.250488 MiB 00:39:21.869 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:39:21.869 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:39:21.869 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57916 00:39:21.869 element at address: 0x20000085e640 with size: 0.125488 MiB 00:39:21.869 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57916 00:39:21.869 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:39:21.869 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:39:21.869 element at address: 0x200028265680 with size: 0.023743 MiB 00:39:21.869 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:39:21.869 element at address: 0x20000085a380 with size: 0.016113 MiB 00:39:21.869 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57916 00:39:21.869 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:39:21.869 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:39:21.869 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:39:21.869 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57916 00:39:21.869 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:39:21.869 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57916 00:39:21.869 element at address: 0x20000085a180 with size: 0.000305 MiB 00:39:21.869 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57916 00:39:21.869 element at address: 0x20002826c280 with size: 0.000305 MiB 00:39:21.869 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:39:21.869 09:51:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:39:21.869 09:51:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57916 00:39:21.869 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57916 ']' 00:39:21.869 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57916 00:39:21.869 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:39:21.869 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:21.869 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57916 00:39:21.869 killing process with pid 57916 00:39:21.869 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:21.869 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:21.869 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57916' 00:39:21.869 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57916 00:39:21.869 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57916 00:39:22.128 ************************************ 00:39:22.128 END TEST dpdk_mem_utility 00:39:22.128 ************************************ 00:39:22.128 00:39:22.128 real 0m1.182s 00:39:22.128 user 0m1.248s 00:39:22.128 sys 0m0.344s 00:39:22.128 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:22.128 09:51:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:39:22.128 09:51:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:39:22.128 09:51:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:22.128 09:51:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:22.128 09:51:29 -- common/autotest_common.sh@10 -- # set +x 
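The heap, mempool, and memzone listing above is produced by two dpdk_mem_info.py invocations: a bare run for the summary and a second run with -m 0 for the per-heap element dump, both reading the /tmp/spdk_mem_dump.txt file that the env_dpdk_get_mem_stats RPC wrote earlier. A condensed sketch of the same sequence against a running target, using the default socket path from this run:

    # 1. Ask the running spdk_tgt to dump its DPDK memory state (writes /tmp/spdk_mem_dump.txt).
    scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # 2. Summarize heaps/mempools/memzones, then list heap 0 element by element.
    scripts/dpdk_mem_info.py
    scripts/dpdk_mem_info.py -m 0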
00:39:22.128 ************************************ 00:39:22.128 START TEST event 00:39:22.128 ************************************ 00:39:22.128 09:51:29 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:39:22.387 * Looking for test storage... 00:39:22.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1711 -- # lcov --version 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:22.387 09:51:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:22.387 09:51:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:22.387 09:51:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:22.387 09:51:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:39:22.387 09:51:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:39:22.387 09:51:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:39:22.387 09:51:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:39:22.387 09:51:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:39:22.387 09:51:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:39:22.387 09:51:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:39:22.387 09:51:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:22.387 09:51:29 event -- scripts/common.sh@344 -- # case "$op" in 00:39:22.387 09:51:29 event -- scripts/common.sh@345 -- # : 1 00:39:22.387 09:51:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:22.387 09:51:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:22.387 09:51:29 event -- scripts/common.sh@365 -- # decimal 1 00:39:22.387 09:51:29 event -- scripts/common.sh@353 -- # local d=1 00:39:22.387 09:51:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:22.387 09:51:29 event -- scripts/common.sh@355 -- # echo 1 00:39:22.387 09:51:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:39:22.387 09:51:29 event -- scripts/common.sh@366 -- # decimal 2 00:39:22.387 09:51:29 event -- scripts/common.sh@353 -- # local d=2 00:39:22.387 09:51:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:22.387 09:51:29 event -- scripts/common.sh@355 -- # echo 2 00:39:22.387 09:51:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:39:22.387 09:51:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:22.387 09:51:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:22.387 09:51:29 event -- scripts/common.sh@368 -- # return 0 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:22.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.387 --rc genhtml_branch_coverage=1 00:39:22.387 --rc genhtml_function_coverage=1 00:39:22.387 --rc genhtml_legend=1 00:39:22.387 --rc geninfo_all_blocks=1 00:39:22.387 --rc geninfo_unexecuted_blocks=1 00:39:22.387 00:39:22.387 ' 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:22.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.387 --rc genhtml_branch_coverage=1 00:39:22.387 --rc genhtml_function_coverage=1 00:39:22.387 --rc genhtml_legend=1 00:39:22.387 --rc 
geninfo_all_blocks=1 00:39:22.387 --rc geninfo_unexecuted_blocks=1 00:39:22.387 00:39:22.387 ' 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:22.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.387 --rc genhtml_branch_coverage=1 00:39:22.387 --rc genhtml_function_coverage=1 00:39:22.387 --rc genhtml_legend=1 00:39:22.387 --rc geninfo_all_blocks=1 00:39:22.387 --rc geninfo_unexecuted_blocks=1 00:39:22.387 00:39:22.387 ' 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:22.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.387 --rc genhtml_branch_coverage=1 00:39:22.387 --rc genhtml_function_coverage=1 00:39:22.387 --rc genhtml_legend=1 00:39:22.387 --rc geninfo_all_blocks=1 00:39:22.387 --rc geninfo_unexecuted_blocks=1 00:39:22.387 00:39:22.387 ' 00:39:22.387 09:51:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:39:22.387 09:51:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:39:22.387 09:51:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:39:22.387 09:51:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:22.387 09:51:29 event -- common/autotest_common.sh@10 -- # set +x 00:39:22.387 ************************************ 00:39:22.387 START TEST event_perf 00:39:22.387 ************************************ 00:39:22.388 09:51:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:39:22.388 Running I/O for 1 seconds...[2024-12-09 09:51:29.827505] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:39:22.388 [2024-12-09 09:51:29.827727] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57994 ] 00:39:22.647 [2024-12-09 09:51:29.978221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:22.647 [2024-12-09 09:51:30.018163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:22.647 [2024-12-09 09:51:30.018320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.647 Running I/O for 1 seconds...[2024-12-09 09:51:30.018255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:22.647 [2024-12-09 09:51:30.018320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:24.021 00:39:24.021 lcore 0: 179752 00:39:24.021 lcore 1: 179753 00:39:24.021 lcore 2: 179752 00:39:24.021 lcore 3: 179752 00:39:24.021 done. 
00:39:24.021 ************************************ 00:39:24.021 END TEST event_perf 00:39:24.021 ************************************ 00:39:24.021 00:39:24.021 real 0m1.310s 00:39:24.021 user 0m4.131s 00:39:24.021 sys 0m0.053s 00:39:24.021 09:51:31 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.021 09:51:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:39:24.021 09:51:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:39:24.021 09:51:31 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:24.021 09:51:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.021 09:51:31 event -- common/autotest_common.sh@10 -- # set +x 00:39:24.021 ************************************ 00:39:24.021 START TEST event_reactor 00:39:24.021 ************************************ 00:39:24.021 09:51:31 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:39:24.021 [2024-12-09 09:51:31.191908] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:39:24.021 [2024-12-09 09:51:31.192268] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58027 ] 00:39:24.021 [2024-12-09 09:51:31.347065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.021 [2024-12-09 09:51:31.382526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.398 test_start 00:39:25.398 oneshot 00:39:25.398 tick 100 00:39:25.398 tick 100 00:39:25.398 tick 250 00:39:25.398 tick 100 00:39:25.398 tick 100 00:39:25.398 tick 100 00:39:25.398 tick 250 00:39:25.398 tick 500 00:39:25.398 tick 100 00:39:25.398 tick 100 00:39:25.398 tick 250 00:39:25.398 tick 100 00:39:25.398 tick 100 00:39:25.398 test_end 00:39:25.398 ************************************ 00:39:25.398 END TEST event_reactor 00:39:25.398 ************************************ 00:39:25.398 00:39:25.398 real 0m1.305s 00:39:25.398 user 0m1.161s 00:39:25.398 sys 0m0.038s 00:39:25.398 09:51:32 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:25.398 09:51:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:39:25.398 09:51:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:39:25.398 09:51:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:25.398 09:51:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:25.398 09:51:32 event -- common/autotest_common.sh@10 -- # set +x 00:39:25.398 ************************************ 00:39:25.398 START TEST event_reactor_perf 00:39:25.398 ************************************ 00:39:25.398 09:51:32 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:39:25.398 [2024-12-09 09:51:32.549740] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
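The per-lcore counters printed by event_perf (one event loop on each core in the 0xF mask) and the oneshot/tick trace printed by event_reactor can be reproduced outside the harness with the same binaries and flags used here; a sketch, assuming the commands are run from the SPDK source tree built for this job:

    # Four reactors, one-second run: prints events handled per lcore.
    test/event/event_perf/event_perf -m 0xF -t 1
    # Single reactor, one-second run: schedules one-shot and periodic events and
    # prints the oneshot/tick trace seen above.
    test/event/reactor/reactor -t 1
    # Single-reactor throughput measurement ("Performance: N events per second").
    test/event/reactor_perf/reactor_perf -t 1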
00:39:25.398 [2024-12-09 09:51:32.549859] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58057 ] 00:39:25.398 [2024-12-09 09:51:32.708640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.398 [2024-12-09 09:51:32.750931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.775 test_start 00:39:26.775 test_end 00:39:26.775 Performance: 322819 events per second 00:39:26.775 00:39:26.775 real 0m1.319s 00:39:26.775 user 0m1.179s 00:39:26.775 sys 0m0.031s 00:39:26.775 09:51:33 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:26.775 ************************************ 00:39:26.775 END TEST event_reactor_perf 00:39:26.775 ************************************ 00:39:26.776 09:51:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:39:26.776 09:51:33 event -- event/event.sh@49 -- # uname -s 00:39:26.776 09:51:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:39:26.776 09:51:33 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:39:26.776 09:51:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:26.776 09:51:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:26.776 09:51:33 event -- common/autotest_common.sh@10 -- # set +x 00:39:26.776 ************************************ 00:39:26.776 START TEST event_scheduler 00:39:26.776 ************************************ 00:39:26.776 09:51:33 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:39:26.776 * Looking for test storage... 
00:39:26.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:39:26.776 09:51:33 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:26.776 09:51:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:39:26.776 09:51:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:39:26.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:26.776 09:51:34 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.776 --rc genhtml_branch_coverage=1 00:39:26.776 --rc genhtml_function_coverage=1 00:39:26.776 --rc genhtml_legend=1 00:39:26.776 --rc geninfo_all_blocks=1 00:39:26.776 --rc geninfo_unexecuted_blocks=1 00:39:26.776 00:39:26.776 ' 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.776 --rc genhtml_branch_coverage=1 00:39:26.776 --rc genhtml_function_coverage=1 00:39:26.776 --rc genhtml_legend=1 00:39:26.776 --rc geninfo_all_blocks=1 00:39:26.776 --rc geninfo_unexecuted_blocks=1 00:39:26.776 00:39:26.776 ' 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.776 --rc genhtml_branch_coverage=1 00:39:26.776 --rc genhtml_function_coverage=1 00:39:26.776 --rc genhtml_legend=1 00:39:26.776 --rc geninfo_all_blocks=1 00:39:26.776 --rc geninfo_unexecuted_blocks=1 00:39:26.776 00:39:26.776 ' 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.776 --rc genhtml_branch_coverage=1 00:39:26.776 --rc genhtml_function_coverage=1 00:39:26.776 --rc genhtml_legend=1 00:39:26.776 --rc geninfo_all_blocks=1 00:39:26.776 --rc geninfo_unexecuted_blocks=1 00:39:26.776 00:39:26.776 ' 00:39:26.776 09:51:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:39:26.776 09:51:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58132 00:39:26.776 09:51:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:39:26.776 09:51:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:39:26.776 09:51:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58132 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58132 ']' 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:26.776 09:51:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:39:26.776 [2024-12-09 09:51:34.143847] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:39:26.776 [2024-12-09 09:51:34.143941] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58132 ] 00:39:27.035 [2024-12-09 09:51:34.295596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:27.035 [2024-12-09 09:51:34.342298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.035 [2024-12-09 09:51:34.342416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:27.035 [2024-12-09 09:51:34.342527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:27.035 [2024-12-09 09:51:34.342528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:39:27.035 09:51:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:39:27.035 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:39:27.035 POWER: Cannot set governor of lcore 0 to userspace 00:39:27.035 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:39:27.035 POWER: Cannot set governor of lcore 0 to performance 00:39:27.035 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:39:27.035 POWER: Cannot set governor of lcore 0 to userspace 00:39:27.035 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:39:27.035 POWER: Cannot set governor of lcore 0 to userspace 00:39:27.035 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:39:27.035 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:39:27.035 POWER: Unable to set Power Management Environment for lcore 0 00:39:27.035 [2024-12-09 09:51:34.433228] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:39:27.035 [2024-12-09 09:51:34.433409] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:39:27.035 [2024-12-09 09:51:34.433480] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:39:27.035 [2024-12-09 09:51:34.433631] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:39:27.035 [2024-12-09 09:51:34.433784] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:39:27.035 [2024-12-09 09:51:34.433842] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.035 09:51:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:39:27.035 [2024-12-09 09:51:34.473241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:27.035 [2024-12-09 09:51:34.494202] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.035 09:51:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:27.035 09:51:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:39:27.035 ************************************ 00:39:27.035 START TEST scheduler_create_thread 00:39:27.035 ************************************ 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.035 2 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.035 3 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.035 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 4 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 5 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 6 00:39:27.293 
09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 7 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 8 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 9 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 10 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.293 09:51:34 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.293 09:51:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.859 ************************************ 00:39:27.859 END TEST scheduler_create_thread 00:39:27.859 ************************************ 00:39:27.859 09:51:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.859 00:39:27.859 real 0m0.591s 00:39:27.859 user 0m0.018s 00:39:27.859 sys 0m0.001s 00:39:27.859 09:51:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:27.859 09:51:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:39:27.859 09:51:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:39:27.859 09:51:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58132 00:39:27.859 09:51:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58132 ']' 00:39:27.859 09:51:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58132 00:39:27.859 09:51:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:39:27.859 09:51:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:27.859 09:51:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58132 00:39:27.859 killing process with pid 58132 00:39:27.859 09:51:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:39:27.859 09:51:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:39:27.859 09:51:35 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58132' 00:39:27.859 09:51:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58132 00:39:27.859 09:51:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58132 00:39:28.117 [2024-12-09 09:51:35.576120] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
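(Annotation, not part of the captured log.) The scheduler_create_thread trace above drives everything over the SPDK RPC socket using the scheduler_plugin helper: set the dynamic scheduler, finish framework init, create pinned active/idle threads per core, then adjust and delete threads by id. A minimal sketch of that sequence, assuming the scheduler test app is already running with --wait-for-rpc on the default /var/tmp/spdk.sock and that scheduler_plugin.py (shipped with the test) is importable via PYTHONPATH; running rpc.py directly like this instead of the test's rpc_cmd wrapper is an assumption for illustration:

    # Sketch of the RPC calls visible in the trace above (hypothetical standalone use).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    PLUGIN="--plugin scheduler_plugin"

    $RPC framework_set_scheduler dynamic      # select the dynamic scheduler before init
    $RPC framework_start_init                 # complete subsystem initialization

    # One fully active thread pinned to each core in the 0xF mask,
    # plus one idle thread per core (as in scheduler.sh@12-19).
    for i in 0 1 2 3; do
      $RPC $PLUGIN scheduler_thread_create -n active_pinned -m $((1 << i)) -a 100
      $RPC $PLUGIN scheduler_thread_create -n idle_pinned   -m $((1 << i)) -a 0
    done

    # Unpinned threads: raise one's activity by id, create and delete another.
    id=$($RPC $PLUGIN scheduler_thread_create -n half_active -a 0)
    $RPC $PLUGIN scheduler_thread_set_active "$id" 50
    id=$($RPC $PLUGIN scheduler_thread_create -n deleted -a 100)
    $RPC $PLUGIN scheduler_thread_delete "$id"

The same pattern (capture the returned thread id, then operate on it) is what the trace records as thread_id=11 and thread_id=12 before the test tears the application down.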
00:39:28.375 ************************************ 00:39:28.375 END TEST event_scheduler 00:39:28.375 ************************************ 00:39:28.375 00:39:28.375 real 0m1.855s 00:39:28.375 user 0m2.313s 00:39:28.375 sys 0m0.305s 00:39:28.375 09:51:35 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:28.375 09:51:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:39:28.375 09:51:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:39:28.375 09:51:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:39:28.375 09:51:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:28.375 09:51:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:28.375 09:51:35 event -- common/autotest_common.sh@10 -- # set +x 00:39:28.375 ************************************ 00:39:28.375 START TEST app_repeat 00:39:28.375 ************************************ 00:39:28.375 09:51:35 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58202 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:39:28.375 Process app_repeat pid: 58202 00:39:28.375 spdk_app_start Round 0 00:39:28.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58202' 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:39:28.375 09:51:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58202 /var/tmp/spdk-nbd.sock 00:39:28.375 09:51:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58202 ']' 00:39:28.375 09:51:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:28.375 09:51:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:28.375 09:51:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:39:28.375 09:51:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:28.375 09:51:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:39:28.375 [2024-12-09 09:51:35.847784] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:39:28.375 [2024-12-09 09:51:35.848126] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58202 ] 00:39:28.632 [2024-12-09 09:51:36.005492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:28.632 [2024-12-09 09:51:36.048247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:28.632 [2024-12-09 09:51:36.048258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.632 [2024-12-09 09:51:36.084826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:28.891 09:51:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:28.891 09:51:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:39:28.891 09:51:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:39:29.149 Malloc0 00:39:29.149 09:51:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:39:29.406 Malloc1 00:39:29.406 09:51:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:29.406 09:51:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:39:29.664 /dev/nbd0 00:39:29.664 09:51:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:29.664 09:51:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:29.664 09:51:37 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:39:29.664 1+0 records in 00:39:29.664 1+0 records out 00:39:29.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294806 s, 13.9 MB/s 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:29.664 09:51:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:39:29.664 09:51:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:29.664 09:51:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:29.664 09:51:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:39:29.922 /dev/nbd1 00:39:30.180 09:51:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:30.180 09:51:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:39:30.180 1+0 records in 00:39:30.180 1+0 records out 00:39:30.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024978 s, 16.4 MB/s 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:30.180 09:51:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:39:30.180 09:51:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:30.180 09:51:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:30.180 09:51:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:39:30.180 09:51:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:30.180 09:51:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:30.439 { 00:39:30.439 "nbd_device": "/dev/nbd0", 00:39:30.439 "bdev_name": "Malloc0" 00:39:30.439 }, 00:39:30.439 { 00:39:30.439 "nbd_device": "/dev/nbd1", 00:39:30.439 "bdev_name": "Malloc1" 00:39:30.439 } 00:39:30.439 ]' 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:30.439 { 00:39:30.439 "nbd_device": "/dev/nbd0", 00:39:30.439 "bdev_name": "Malloc0" 00:39:30.439 }, 00:39:30.439 { 00:39:30.439 "nbd_device": "/dev/nbd1", 00:39:30.439 "bdev_name": "Malloc1" 00:39:30.439 } 00:39:30.439 ]' 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:39:30.439 /dev/nbd1' 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:39:30.439 /dev/nbd1' 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:39:30.439 256+0 records in 00:39:30.439 256+0 records out 00:39:30.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00775643 s, 135 MB/s 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:30.439 256+0 records in 00:39:30.439 256+0 records out 00:39:30.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223424 s, 46.9 MB/s 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:39:30.439 256+0 records in 00:39:30.439 256+0 records out 00:39:30.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251149 s, 41.8 MB/s 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:30.439 09:51:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:31.013 09:51:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:31.013 09:51:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:31.013 09:51:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:31.013 09:51:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:31.013 09:51:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:31.013 09:51:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:31.013 09:51:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:39:31.013 09:51:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:39:31.013 09:51:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:31.013 09:51:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:31.272 09:51:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:31.272 09:51:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:31.272 09:51:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:31.272 09:51:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:31.272 09:51:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:31.272 09:51:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:31.272 09:51:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:39:31.272 09:51:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:39:31.272 09:51:38 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:31.272 09:51:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:31.272 09:51:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:31.531 09:51:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:39:31.531 09:51:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:39:31.790 09:51:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:39:32.050 [2024-12-09 09:51:39.313922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:32.050 [2024-12-09 09:51:39.349257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:32.050 [2024-12-09 09:51:39.349269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.050 [2024-12-09 09:51:39.382302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:32.050 [2024-12-09 09:51:39.382398] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:39:32.050 [2024-12-09 09:51:39.382411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:39:35.344 09:51:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:39:35.344 spdk_app_start Round 1 00:39:35.344 09:51:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:39:35.344 09:51:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58202 /var/tmp/spdk-nbd.sock 00:39:35.344 09:51:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58202 ']' 00:39:35.344 09:51:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:35.344 09:51:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:35.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:35.344 09:51:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:39:35.344 09:51:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:35.344 09:51:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:39:35.344 09:51:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:35.344 09:51:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:39:35.344 09:51:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:39:35.344 Malloc0 00:39:35.344 09:51:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:39:35.653 Malloc1 00:39:35.653 09:51:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:39:35.653 09:51:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:35.653 09:51:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:39:35.653 09:51:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:35.654 09:51:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:39:35.933 /dev/nbd0 00:39:35.933 09:51:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:35.933 09:51:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:39:35.933 1+0 records in 00:39:35.933 1+0 records out 
00:39:35.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227168 s, 18.0 MB/s 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:35.933 09:51:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:39:35.933 09:51:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:35.933 09:51:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:35.933 09:51:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:39:36.200 /dev/nbd1 00:39:36.200 09:51:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:36.200 09:51:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:39:36.200 1+0 records in 00:39:36.200 1+0 records out 00:39:36.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362261 s, 11.3 MB/s 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:36.200 09:51:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:36.458 09:51:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:39:36.458 09:51:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:36.458 09:51:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:36.459 09:51:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:36.459 09:51:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:36.459 09:51:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:36.717 { 00:39:36.717 "nbd_device": "/dev/nbd0", 00:39:36.717 "bdev_name": "Malloc0" 00:39:36.717 }, 00:39:36.717 { 00:39:36.717 "nbd_device": "/dev/nbd1", 00:39:36.717 "bdev_name": "Malloc1" 00:39:36.717 } 
00:39:36.717 ]' 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:36.717 { 00:39:36.717 "nbd_device": "/dev/nbd0", 00:39:36.717 "bdev_name": "Malloc0" 00:39:36.717 }, 00:39:36.717 { 00:39:36.717 "nbd_device": "/dev/nbd1", 00:39:36.717 "bdev_name": "Malloc1" 00:39:36.717 } 00:39:36.717 ]' 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:39:36.717 /dev/nbd1' 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:39:36.717 /dev/nbd1' 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:39:36.717 256+0 records in 00:39:36.717 256+0 records out 00:39:36.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00737839 s, 142 MB/s 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:36.717 256+0 records in 00:39:36.717 256+0 records out 00:39:36.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254023 s, 41.3 MB/s 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:39:36.717 256+0 records in 00:39:36.717 256+0 records out 00:39:36.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285896 s, 36.7 MB/s 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:36.717 09:51:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:36.717 09:51:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:36.975 09:51:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:36.975 09:51:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:36.975 09:51:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:36.975 09:51:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:36.975 09:51:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:36.975 09:51:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:36.975 09:51:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:39:36.975 09:51:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:39:36.975 09:51:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:36.975 09:51:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:37.540 09:51:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:37.798 09:51:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:39:37.798 09:51:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:39:38.056 09:51:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:39:38.314 [2024-12-09 09:51:45.570763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:38.314 [2024-12-09 09:51:45.602504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.314 [2024-12-09 09:51:45.602517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.314 [2024-12-09 09:51:45.634960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:38.314 [2024-12-09 09:51:45.635081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:39:38.314 [2024-12-09 09:51:45.635098] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:39:41.598 spdk_app_start Round 2 00:39:41.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:41.598 09:51:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:39:41.598 09:51:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:39:41.598 09:51:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58202 /var/tmp/spdk-nbd.sock 00:39:41.598 09:51:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58202 ']' 00:39:41.598 09:51:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:41.598 09:51:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:41.598 09:51:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:39:41.598 09:51:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:41.598 09:51:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:39:41.598 09:51:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:41.598 09:51:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:39:41.598 09:51:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:39:41.598 Malloc0 00:39:41.598 09:51:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:39:41.858 Malloc1 00:39:41.858 09:51:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:41.858 09:51:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:39:42.117 /dev/nbd0 00:39:42.117 09:51:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:42.117 09:51:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:39:42.117 1+0 records in 00:39:42.117 1+0 records out 
00:39:42.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285782 s, 14.3 MB/s 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:42.117 09:51:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:39:42.117 09:51:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:42.117 09:51:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:42.117 09:51:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:39:42.683 /dev/nbd1 00:39:42.683 09:51:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:42.683 09:51:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:39:42.683 1+0 records in 00:39:42.683 1+0 records out 00:39:42.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320037 s, 12.8 MB/s 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:42.683 09:51:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:39:42.683 09:51:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:42.683 09:51:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:42.683 09:51:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:42.683 09:51:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:42.683 09:51:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:42.941 { 00:39:42.941 "nbd_device": "/dev/nbd0", 00:39:42.941 "bdev_name": "Malloc0" 00:39:42.941 }, 00:39:42.941 { 00:39:42.941 "nbd_device": "/dev/nbd1", 00:39:42.941 "bdev_name": "Malloc1" 00:39:42.941 } 
00:39:42.941 ]' 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:42.941 { 00:39:42.941 "nbd_device": "/dev/nbd0", 00:39:42.941 "bdev_name": "Malloc0" 00:39:42.941 }, 00:39:42.941 { 00:39:42.941 "nbd_device": "/dev/nbd1", 00:39:42.941 "bdev_name": "Malloc1" 00:39:42.941 } 00:39:42.941 ]' 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:39:42.941 /dev/nbd1' 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:39:42.941 /dev/nbd1' 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:39:42.941 256+0 records in 00:39:42.941 256+0 records out 00:39:42.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743049 s, 141 MB/s 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:42.941 256+0 records in 00:39:42.941 256+0 records out 00:39:42.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213844 s, 49.0 MB/s 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:39:42.941 256+0 records in 00:39:42.941 256+0 records out 00:39:42.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233588 s, 44.9 MB/s 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:42.941 09:51:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:42.941 09:51:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:43.199 09:51:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:43.199 09:51:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:43.199 09:51:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:43.199 09:51:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:43.199 09:51:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:43.199 09:51:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:43.457 09:51:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:39:43.457 09:51:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:39:43.457 09:51:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:43.457 09:51:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:43.713 09:51:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:43.713 09:51:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:43.713 09:51:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:43.713 09:51:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:43.713 09:51:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:43.714 09:51:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:43.714 09:51:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:39:43.714 09:51:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:39:43.714 09:51:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:43.714 09:51:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:43.714 09:51:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:43.971 09:51:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:39:43.971 09:51:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:39:44.229 09:51:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:39:44.229 [2024-12-09 09:51:51.669804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:44.229 [2024-12-09 09:51:51.702459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:44.229 [2024-12-09 09:51:51.702471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:44.487 [2024-12-09 09:51:51.735644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:44.487 [2024-12-09 09:51:51.735722] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:39:44.487 [2024-12-09 09:51:51.735737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:39:47.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:47.767 09:51:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58202 /var/tmp/spdk-nbd.sock 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58202 ']' 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
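The Round 2 pass above reduces to a short NBD round-trip check: create two malloc bdevs, expose them as /dev/nbd0 and /dev/nbd1, push the same random data through both, and read it back. A minimal standalone sketch of that flow, reusing only the rpc.py calls and dd/cmp invocations visible in the trace (the scratch-file path and the assumption of an already-running app on /var/tmp/spdk-nbd.sock are simplifications):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    TMP=/tmp/nbdrandtest                        # stand-in for the repo-local scratch file
    $RPC bdev_malloc_create 64 4096             # Malloc0: 64 MB malloc bdev, 4096-byte blocks
    $RPC bdev_malloc_create 64 4096             # Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of="$TMP" bs=4096 count=256            # 1 MiB of test data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$TMP" "$dev"              # non-zero exit means the data did not survive
    done
    rm -f "$TMP"
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1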
00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:39:47.767 09:51:54 event.app_repeat -- event/event.sh@39 -- # killprocess 58202 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58202 ']' 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58202 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58202 00:39:47.767 killing process with pid 58202 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58202' 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58202 00:39:47.767 09:51:54 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58202 00:39:47.767 spdk_app_start is called in Round 0. 00:39:47.767 Shutdown signal received, stop current app iteration 00:39:47.767 Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 reinitialization... 00:39:47.767 spdk_app_start is called in Round 1. 00:39:47.767 Shutdown signal received, stop current app iteration 00:39:47.767 Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 reinitialization... 00:39:47.767 spdk_app_start is called in Round 2. 00:39:47.767 Shutdown signal received, stop current app iteration 00:39:47.767 Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 reinitialization... 00:39:47.767 spdk_app_start is called in Round 3. 00:39:47.767 Shutdown signal received, stop current app iteration 00:39:47.767 09:51:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:39:47.767 09:51:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:39:47.767 00:39:47.767 real 0m19.231s 00:39:47.767 user 0m44.356s 00:39:47.767 sys 0m2.726s 00:39:47.767 09:51:55 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:47.767 09:51:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:39:47.767 ************************************ 00:39:47.767 END TEST app_repeat 00:39:47.767 ************************************ 00:39:47.767 09:51:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:39:47.767 09:51:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:39:47.767 09:51:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:47.767 09:51:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:47.767 09:51:55 event -- common/autotest_common.sh@10 -- # set +x 00:39:47.767 ************************************ 00:39:47.767 START TEST cpu_locks 00:39:47.767 ************************************ 00:39:47.767 09:51:55 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:39:47.767 * Looking for test storage... 
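killprocess, which shows up throughout these tests, is essentially a guarded kill-and-reap: confirm the PID still exists, check that it is the expected reactor process rather than a sudo wrapper, then signal it and collect its exit status. A reduced sketch of that logic (the sudo-wrapped branch of the real helper is omitted, and wait only reaps children of the calling shell, which is how the tests launch spdk_tgt):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                         # bail out if the PID is already gone
        process_name=$(ps --no-headers -o comm= "$pid")    # e.g. "reactor_0" for an SPDK target
        [ "$process_name" = sudo ] && return 1             # the real helper follows the child instead
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                        # propagate the target's exit status
    }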
00:39:47.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:39:47.767 09:51:55 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:47.767 09:51:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:39:47.767 09:51:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:47.767 09:51:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:47.767 09:51:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:39:47.767 09:51:55 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:47.767 09:51:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:47.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.767 --rc genhtml_branch_coverage=1 00:39:47.767 --rc genhtml_function_coverage=1 00:39:47.767 --rc genhtml_legend=1 00:39:47.767 --rc geninfo_all_blocks=1 00:39:47.767 --rc geninfo_unexecuted_blocks=1 00:39:47.767 00:39:47.767 ' 00:39:47.767 09:51:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:47.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.767 --rc genhtml_branch_coverage=1 00:39:47.767 --rc genhtml_function_coverage=1 
00:39:47.767 --rc genhtml_legend=1 00:39:47.767 --rc geninfo_all_blocks=1 00:39:47.767 --rc geninfo_unexecuted_blocks=1 00:39:47.767 00:39:47.767 ' 00:39:47.767 09:51:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:47.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.767 --rc genhtml_branch_coverage=1 00:39:47.767 --rc genhtml_function_coverage=1 00:39:47.768 --rc genhtml_legend=1 00:39:47.768 --rc geninfo_all_blocks=1 00:39:47.768 --rc geninfo_unexecuted_blocks=1 00:39:47.768 00:39:47.768 ' 00:39:47.768 09:51:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:47.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.768 --rc genhtml_branch_coverage=1 00:39:47.768 --rc genhtml_function_coverage=1 00:39:47.768 --rc genhtml_legend=1 00:39:47.768 --rc geninfo_all_blocks=1 00:39:47.768 --rc geninfo_unexecuted_blocks=1 00:39:47.768 00:39:47.768 ' 00:39:47.768 09:51:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:39:47.768 09:51:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:39:47.768 09:51:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:39:47.768 09:51:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:39:48.026 09:51:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:48.026 09:51:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:48.026 09:51:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:39:48.026 ************************************ 00:39:48.026 START TEST default_locks 00:39:48.026 ************************************ 00:39:48.026 09:51:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:39:48.026 09:51:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58641 00:39:48.026 09:51:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58641 00:39:48.026 09:51:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58641 ']' 00:39:48.026 09:51:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:39:48.026 09:51:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.026 09:51:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:48.026 09:51:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.026 09:51:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.026 09:51:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:39:48.026 [2024-12-09 09:51:55.346033] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:39:48.026 [2024-12-09 09:51:55.346153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58641 ] 00:39:48.026 [2024-12-09 09:51:55.502666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.284 [2024-12-09 09:51:55.542746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:48.284 [2024-12-09 09:51:55.588875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:49.217 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.217 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:39:49.217 09:51:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58641 00:39:49.217 09:51:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:39:49.217 09:51:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58641 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58641 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58641 ']' 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58641 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58641 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:49.475 killing process with pid 58641 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58641' 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58641 00:39:49.475 09:51:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58641 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58641 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58641 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58641 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58641 ']' 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:49.733 
09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:49.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:39:49.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58641) - No such process 00:39:49.733 ERROR: process (pid: 58641) is no longer running 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:39:49.733 00:39:49.733 real 0m1.858s 00:39:49.733 user 0m2.154s 00:39:49.733 sys 0m0.481s 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:49.733 09:51:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:39:49.733 ************************************ 00:39:49.733 END TEST default_locks 00:39:49.733 ************************************ 00:39:49.733 09:51:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:39:49.733 09:51:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:49.733 09:51:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:49.733 09:51:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:39:49.733 ************************************ 00:39:49.733 START TEST default_locks_via_rpc 00:39:49.733 ************************************ 00:39:49.733 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:39:49.733 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58687 00:39:49.733 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58687 00:39:49.733 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58687 ']' 00:39:49.733 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:49.733 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.733 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:39:49.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:49.733 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:49.733 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.733 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:49.991 [2024-12-09 09:51:57.245587] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:39:49.991 [2024-12-09 09:51:57.245676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58687 ] 00:39:49.991 [2024-12-09 09:51:57.388823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.991 [2024-12-09 09:51:57.421492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.991 [2024-12-09 09:51:57.462238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58687 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58687 00:39:50.250 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58687 00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58687 ']' 00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58687 
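What default_locks_via_rpc is exercising above: the per-core lock can be dropped and re-taken at runtime over RPC, and lslocks on the target PID tells you whether the spdk_cpu_lock file is currently held. Roughly, with TGT_PID standing in for the PID of an spdk_tgt started with -m 0x1:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

    $RPC framework_disable_cpumask_locks           # release the per-core lock file
    locks_exist "$TGT_PID" && echo "unexpected: core lock still held"
    $RPC framework_enable_cpumask_locks            # take it again
    locks_exist "$TGT_PID" || echo "unexpected: core lock missing"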
00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58687 00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:50.509 killing process with pid 58687 00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58687' 00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58687 00:39:50.509 09:51:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58687 00:39:51.076 00:39:51.076 real 0m1.097s 00:39:51.076 user 0m1.159s 00:39:51.076 sys 0m0.390s 00:39:51.076 09:51:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:51.076 09:51:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:51.076 ************************************ 00:39:51.076 END TEST default_locks_via_rpc 00:39:51.076 ************************************ 00:39:51.076 09:51:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:39:51.076 09:51:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:51.076 09:51:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:51.076 09:51:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:39:51.076 ************************************ 00:39:51.076 START TEST non_locking_app_on_locked_coremask 00:39:51.076 ************************************ 00:39:51.076 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:39:51.076 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58731 00:39:51.076 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:39:51.076 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58731 /var/tmp/spdk.sock 00:39:51.076 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58731 ']' 00:39:51.076 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:51.076 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:51.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:51.076 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:51.076 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:51.076 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:51.076 [2024-12-09 09:51:58.408043] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:39:51.076 [2024-12-09 09:51:58.408146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58731 ] 00:39:51.076 [2024-12-09 09:51:58.560125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.335 [2024-12-09 09:51:58.596174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.335 [2024-12-09 09:51:58.640524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58739 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58739 /var/tmp/spdk2.sock 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58739 ']' 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:51.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:51.335 09:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:51.593 [2024-12-09 09:51:58.851821] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:39:51.593 [2024-12-09 09:51:58.851927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58739 ] 00:39:51.593 [2024-12-09 09:51:59.030583] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
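The pattern in non_locking_app_on_locked_coremask: a first target claims core 0 as usual, and a second target can still come up on the same mask only because it opts out of the lock and listens on its own RPC socket. Stripped of the harness, the launch pair looks roughly like this (backgrounding and PID capture are assumptions; the flags are the ones shown in the trace):

    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$BIN" -m 0x1 &                                                  # first instance takes the core-0 lock
    PID1=$!
    "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips the lock
    PID2=$!
    # With the lock skipped, lslocks shows spdk_cpu_lock for PID1 but not PID2;
    # without --disable-cpumask-locks the second start aborts (see the
    # "Cannot create lock on core 0" error later in this run).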
00:39:51.593 [2024-12-09 09:51:59.030630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.851 [2024-12-09 09:51:59.102680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.851 [2024-12-09 09:51:59.189499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:52.784 09:51:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:52.784 09:51:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:39:52.784 09:51:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58731 00:39:52.784 09:51:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58731 00:39:52.784 09:51:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58731 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58731 ']' 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58731 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58731 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:53.350 killing process with pid 58731 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58731' 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58731 00:39:53.350 09:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58731 00:39:53.915 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58739 00:39:53.915 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58739 ']' 00:39:53.915 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58739 00:39:53.916 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:39:53.916 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:53.916 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58739 00:39:53.916 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:53.916 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:53.916 killing process with pid 58739 00:39:53.916 09:52:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58739' 00:39:53.916 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58739 00:39:53.916 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58739 00:39:54.173 00:39:54.173 real 0m3.313s 00:39:54.173 user 0m3.891s 00:39:54.173 sys 0m0.898s 00:39:54.173 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:54.173 09:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:54.173 ************************************ 00:39:54.173 END TEST non_locking_app_on_locked_coremask 00:39:54.173 ************************************ 00:39:54.431 09:52:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:39:54.431 09:52:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:54.431 09:52:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:54.431 09:52:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:39:54.431 ************************************ 00:39:54.431 START TEST locking_app_on_unlocked_coremask 00:39:54.431 ************************************ 00:39:54.431 09:52:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:39:54.431 09:52:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58806 00:39:54.431 09:52:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58806 /var/tmp/spdk.sock 00:39:54.431 09:52:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58806 ']' 00:39:54.431 09:52:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:39:54.431 09:52:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:54.431 09:52:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:54.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:54.431 09:52:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:54.431 09:52:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:54.431 09:52:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:54.431 [2024-12-09 09:52:01.757615] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:39:54.431 [2024-12-09 09:52:01.757731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58806 ] 00:39:54.431 [2024-12-09 09:52:01.913931] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:39:54.431 [2024-12-09 09:52:01.913997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.689 [2024-12-09 09:52:01.945500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.689 [2024-12-09 09:52:01.984063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58809 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58809 /var/tmp/spdk2.sock 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58809 ']' 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:54.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:54.689 09:52:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:54.689 [2024-12-09 09:52:02.181151] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:39:54.689 [2024-12-09 09:52:02.181249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58809 ] 00:39:54.948 [2024-12-09 09:52:02.347436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.948 [2024-12-09 09:52:02.418653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.209 [2024-12-09 09:52:02.502553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:55.785 09:52:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:55.785 09:52:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:39:55.785 09:52:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58809 00:39:55.785 09:52:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:39:55.785 09:52:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58809 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58806 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58806 ']' 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58806 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58806 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:56.732 killing process with pid 58806 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58806' 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58806 00:39:56.732 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58806 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58809 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58809 ']' 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58809 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58809 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:57.305 killing process with pid 58809 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58809' 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58809 00:39:57.305 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58809 00:39:57.563 00:39:57.563 real 0m3.228s 00:39:57.563 user 0m3.780s 00:39:57.563 sys 0m0.896s 00:39:57.563 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:57.563 09:52:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:57.563 ************************************ 00:39:57.563 END TEST locking_app_on_unlocked_coremask 00:39:57.563 ************************************ 00:39:57.563 09:52:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:39:57.563 09:52:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:57.563 09:52:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:57.563 09:52:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:39:57.563 ************************************ 00:39:57.563 START TEST locking_app_on_locked_coremask 00:39:57.563 ************************************ 00:39:57.563 09:52:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:39:57.563 09:52:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58876 00:39:57.563 09:52:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58876 /var/tmp/spdk.sock 00:39:57.563 09:52:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58876 ']' 00:39:57.563 09:52:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:39:57.563 09:52:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:57.563 09:52:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:57.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:57.564 09:52:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:57.564 09:52:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:57.564 09:52:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:57.564 [2024-12-09 09:52:05.049399] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:39:57.564 [2024-12-09 09:52:05.049510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58876 ] 00:39:57.822 [2024-12-09 09:52:05.199767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:57.822 [2024-12-09 09:52:05.229545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.822 [2024-12-09 09:52:05.266784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58879 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58879 /var/tmp/spdk2.sock 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58879 /var/tmp/spdk2.sock 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:39:58.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58879 /var/tmp/spdk2.sock 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58879 ']' 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:58.081 09:52:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:58.081 [2024-12-09 09:52:05.462522] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:39:58.081 [2024-12-09 09:52:05.462640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58879 ] 00:39:58.339 [2024-12-09 09:52:05.634190] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58876 has claimed it. 00:39:58.339 [2024-12-09 09:52:05.634265] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:39:58.905 ERROR: process (pid: 58879) is no longer running 00:39:58.905 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58879) - No such process 00:39:58.905 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:58.905 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:39:58.905 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:39:58.905 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:58.905 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:58.905 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:58.905 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58876 00:39:58.905 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58876 00:39:58.905 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:39:59.163 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58876 00:39:59.163 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58876 ']' 00:39:59.163 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58876 00:39:59.163 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:39:59.163 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:59.163 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58876 00:39:59.163 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:59.164 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:59.164 killing process with pid 58876 00:39:59.164 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58876' 00:39:59.164 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58876 00:39:59.164 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58876 00:39:59.730 00:39:59.730 real 0m1.951s 00:39:59.730 user 0m2.329s 00:39:59.730 sys 0m0.505s 00:39:59.730 09:52:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:59.730 09:52:06 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:39:59.730 ************************************ 00:39:59.730 END TEST locking_app_on_locked_coremask 00:39:59.730 ************************************ 00:39:59.730 09:52:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:39:59.730 09:52:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:59.730 09:52:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:59.730 09:52:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:39:59.730 ************************************ 00:39:59.730 START TEST locking_overlapped_coremask 00:39:59.730 ************************************ 00:39:59.730 09:52:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:39:59.730 09:52:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58930 00:39:59.730 09:52:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58930 /var/tmp/spdk.sock 00:39:59.730 09:52:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:39:59.730 09:52:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58930 ']' 00:39:59.730 09:52:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:59.730 09:52:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:59.731 09:52:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:59.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:59.731 09:52:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:59.731 09:52:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:59.731 [2024-12-09 09:52:07.053176] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:39:59.731 [2024-12-09 09:52:07.053282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58930 ] 00:39:59.731 [2024-12-09 09:52:07.202901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:59.989 [2024-12-09 09:52:07.235872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:59.989 [2024-12-09 09:52:07.236046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:59.989 [2024-12-09 09:52:07.236053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.989 [2024-12-09 09:52:07.275097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58935 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58935 /var/tmp/spdk2.sock 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58935 /var/tmp/spdk2.sock 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58935 /var/tmp/spdk2.sock 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58935 ']' 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:59.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:59.989 09:52:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:39:59.989 [2024-12-09 09:52:07.452024] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:39:59.989 [2024-12-09 09:52:07.452113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58935 ] 00:40:00.247 [2024-12-09 09:52:07.619203] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58930 has claimed it. 00:40:00.247 [2024-12-09 09:52:07.619265] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:40:00.815 ERROR: process (pid: 58935) is no longer running 00:40:00.815 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58935) - No such process 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58930 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58930 ']' 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58930 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58930 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:00.815 killing process with pid 58930 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58930' 00:40:00.815 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58930 00:40:00.815 09:52:08 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58930 00:40:01.074 00:40:01.074 real 0m1.532s 00:40:01.074 user 0m4.158s 00:40:01.074 sys 0m0.300s 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:40:01.074 ************************************ 00:40:01.074 END TEST locking_overlapped_coremask 00:40:01.074 ************************************ 00:40:01.074 09:52:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:40:01.074 09:52:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:01.074 09:52:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:01.074 09:52:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:40:01.074 ************************************ 00:40:01.074 START TEST locking_overlapped_coremask_via_rpc 00:40:01.074 ************************************ 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58981 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58981 /var/tmp/spdk.sock 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58981 ']' 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:01.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:01.074 09:52:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:01.333 [2024-12-09 09:52:08.629621] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:01.333 [2024-12-09 09:52:08.629723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58981 ] 00:40:01.333 [2024-12-09 09:52:08.777079] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:40:01.333 [2024-12-09 09:52:08.777112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:01.333 [2024-12-09 09:52:08.811143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:01.333 [2024-12-09 09:52:08.811224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:01.333 [2024-12-09 09:52:08.811228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:01.619 [2024-12-09 09:52:08.851292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58999 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58999 /var/tmp/spdk2.sock 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58999 ']' 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:40:02.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:02.186 09:52:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:02.186 [2024-12-09 09:52:09.672417] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:02.186 [2024-12-09 09:52:09.672494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58999 ] 00:40:02.445 [2024-12-09 09:52:09.836407] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:40:02.445 [2024-12-09 09:52:09.838996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:02.445 [2024-12-09 09:52:09.908659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:02.445 [2024-12-09 09:52:09.908782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:02.445 [2024-12-09 09:52:09.908783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:02.703 [2024-12-09 09:52:09.988484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:03.641 [2024-12-09 09:52:10.838418] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58981 has claimed it. 
00:40:03.641 request: 00:40:03.641 { 00:40:03.641 "method": "framework_enable_cpumask_locks", 00:40:03.641 "req_id": 1 00:40:03.641 } 00:40:03.641 Got JSON-RPC error response 00:40:03.641 response: 00:40:03.641 { 00:40:03.641 "code": -32603, 00:40:03.641 "message": "Failed to claim CPU core: 2" 00:40:03.641 } 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58981 /var/tmp/spdk.sock 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58981 ']' 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:03.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:03.641 09:52:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:03.901 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:03.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:40:03.901 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:40:03.901 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58999 /var/tmp/spdk2.sock 00:40:03.901 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58999 ']' 00:40:03.901 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:40:03.901 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:03.901 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:40:03.901 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:03.901 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:04.159 ************************************ 00:40:04.159 END TEST locking_overlapped_coremask_via_rpc 00:40:04.159 ************************************ 00:40:04.159 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:04.159 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:40:04.159 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:40:04.159 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:40:04.159 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:40:04.159 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:40:04.159 00:40:04.159 real 0m2.983s 00:40:04.159 user 0m1.704s 00:40:04.159 sys 0m0.195s 00:40:04.159 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:04.159 09:52:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:04.159 09:52:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:40:04.159 09:52:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58981 ]] 00:40:04.159 09:52:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58981 00:40:04.159 09:52:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58981 ']' 00:40:04.159 09:52:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58981 00:40:04.159 09:52:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:40:04.159 09:52:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:04.159 09:52:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58981 00:40:04.159 killing process with pid 58981 00:40:04.159 09:52:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:04.159 09:52:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:04.159 09:52:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58981' 00:40:04.159 09:52:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58981 00:40:04.159 09:52:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58981 00:40:04.417 09:52:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58999 ]] 00:40:04.417 09:52:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58999 00:40:04.417 09:52:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58999 ']' 00:40:04.417 09:52:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58999 00:40:04.417 09:52:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:40:04.417 09:52:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:04.417 
09:52:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58999 00:40:04.674 killing process with pid 58999 00:40:04.674 09:52:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:40:04.674 09:52:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:40:04.674 09:52:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58999' 00:40:04.674 09:52:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58999 00:40:04.674 09:52:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58999 00:40:04.932 09:52:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:40:04.932 09:52:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:40:04.932 09:52:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58981 ]] 00:40:04.932 09:52:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58981 00:40:04.932 09:52:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58981 ']' 00:40:04.932 09:52:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58981 00:40:04.932 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58981) - No such process 00:40:04.932 09:52:12 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58981 is not found' 00:40:04.932 Process with pid 58981 is not found 00:40:04.932 09:52:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58999 ]] 00:40:04.932 09:52:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58999 00:40:04.932 09:52:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58999 ']' 00:40:04.932 09:52:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58999 00:40:04.932 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58999) - No such process 00:40:04.932 Process with pid 58999 is not found 00:40:04.932 09:52:12 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58999 is not found' 00:40:04.932 09:52:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:40:04.932 00:40:04.932 real 0m17.130s 00:40:04.932 user 0m33.018s 00:40:04.932 sys 0m4.318s 00:40:04.932 09:52:12 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:04.932 09:52:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:40:04.932 ************************************ 00:40:04.932 END TEST cpu_locks 00:40:04.932 ************************************ 00:40:04.932 00:40:04.932 real 0m42.662s 00:40:04.932 user 1m26.391s 00:40:04.932 sys 0m7.731s 00:40:04.932 09:52:12 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:04.932 09:52:12 event -- common/autotest_common.sh@10 -- # set +x 00:40:04.932 ************************************ 00:40:04.932 END TEST event 00:40:04.932 ************************************ 00:40:04.932 09:52:12 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:40:04.932 09:52:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:04.932 09:52:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:04.932 09:52:12 -- common/autotest_common.sh@10 -- # set +x 00:40:04.932 ************************************ 00:40:04.932 START TEST thread 00:40:04.932 ************************************ 00:40:04.932 09:52:12 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:40:04.932 * Looking for test storage... 
00:40:04.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:40:04.932 09:52:12 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:04.932 09:52:12 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:40:04.932 09:52:12 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:05.191 09:52:12 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:05.191 09:52:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:05.191 09:52:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:05.191 09:52:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:05.191 09:52:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:40:05.191 09:52:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:40:05.191 09:52:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:40:05.191 09:52:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:40:05.191 09:52:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:40:05.191 09:52:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:40:05.191 09:52:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:40:05.191 09:52:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:05.191 09:52:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:40:05.191 09:52:12 thread -- scripts/common.sh@345 -- # : 1 00:40:05.191 09:52:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:05.191 09:52:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:05.191 09:52:12 thread -- scripts/common.sh@365 -- # decimal 1 00:40:05.191 09:52:12 thread -- scripts/common.sh@353 -- # local d=1 00:40:05.191 09:52:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:05.191 09:52:12 thread -- scripts/common.sh@355 -- # echo 1 00:40:05.191 09:52:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:40:05.191 09:52:12 thread -- scripts/common.sh@366 -- # decimal 2 00:40:05.191 09:52:12 thread -- scripts/common.sh@353 -- # local d=2 00:40:05.191 09:52:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:05.191 09:52:12 thread -- scripts/common.sh@355 -- # echo 2 00:40:05.191 09:52:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:40:05.191 09:52:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:05.191 09:52:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:05.191 09:52:12 thread -- scripts/common.sh@368 -- # return 0 00:40:05.191 09:52:12 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:05.191 09:52:12 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.191 --rc genhtml_branch_coverage=1 00:40:05.191 --rc genhtml_function_coverage=1 00:40:05.191 --rc genhtml_legend=1 00:40:05.191 --rc geninfo_all_blocks=1 00:40:05.191 --rc geninfo_unexecuted_blocks=1 00:40:05.191 00:40:05.191 ' 00:40:05.191 09:52:12 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.191 --rc genhtml_branch_coverage=1 00:40:05.191 --rc genhtml_function_coverage=1 00:40:05.191 --rc genhtml_legend=1 00:40:05.191 --rc geninfo_all_blocks=1 00:40:05.191 --rc geninfo_unexecuted_blocks=1 00:40:05.191 00:40:05.191 ' 00:40:05.191 09:52:12 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:40:05.191 --rc genhtml_branch_coverage=1 00:40:05.191 --rc genhtml_function_coverage=1 00:40:05.191 --rc genhtml_legend=1 00:40:05.191 --rc geninfo_all_blocks=1 00:40:05.191 --rc geninfo_unexecuted_blocks=1 00:40:05.191 00:40:05.191 ' 00:40:05.191 09:52:12 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.191 --rc genhtml_branch_coverage=1 00:40:05.191 --rc genhtml_function_coverage=1 00:40:05.191 --rc genhtml_legend=1 00:40:05.191 --rc geninfo_all_blocks=1 00:40:05.191 --rc geninfo_unexecuted_blocks=1 00:40:05.191 00:40:05.191 ' 00:40:05.191 09:52:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:40:05.191 09:52:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:40:05.191 09:52:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:05.191 09:52:12 thread -- common/autotest_common.sh@10 -- # set +x 00:40:05.191 ************************************ 00:40:05.191 START TEST thread_poller_perf 00:40:05.191 ************************************ 00:40:05.191 09:52:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:40:05.191 [2024-12-09 09:52:12.521440] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:05.191 [2024-12-09 09:52:12.521537] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59129 ] 00:40:05.191 [2024-12-09 09:52:12.666725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:05.450 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:40:05.450 [2024-12-09 09:52:12.698561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.384 [2024-12-09T09:52:13.886Z] ====================================== 00:40:06.384 [2024-12-09T09:52:13.886Z] busy:2207420904 (cyc) 00:40:06.384 [2024-12-09T09:52:13.886Z] total_run_count: 306000 00:40:06.384 [2024-12-09T09:52:13.886Z] tsc_hz: 2200000000 (cyc) 00:40:06.384 [2024-12-09T09:52:13.886Z] ====================================== 00:40:06.384 [2024-12-09T09:52:13.886Z] poller_cost: 7213 (cyc), 3278 (nsec) 00:40:06.384 00:40:06.384 real 0m1.293s 00:40:06.384 user 0m1.152s 00:40:06.384 sys 0m0.036s 00:40:06.384 09:52:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:06.384 09:52:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:40:06.384 ************************************ 00:40:06.384 END TEST thread_poller_perf 00:40:06.384 ************************************ 00:40:06.384 09:52:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:40:06.384 09:52:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:40:06.384 09:52:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:06.384 09:52:13 thread -- common/autotest_common.sh@10 -- # set +x 00:40:06.384 ************************************ 00:40:06.384 START TEST thread_poller_perf 00:40:06.384 ************************************ 00:40:06.384 09:52:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:40:06.384 [2024-12-09 09:52:13.865220] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:06.384 [2024-12-09 09:52:13.865691] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59165 ] 00:40:06.642 [2024-12-09 09:52:14.015254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.642 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:40:06.642 [2024-12-09 09:52:14.048624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.016 [2024-12-09T09:52:15.518Z] ====================================== 00:40:08.016 [2024-12-09T09:52:15.518Z] busy:2202006601 (cyc) 00:40:08.016 [2024-12-09T09:52:15.518Z] total_run_count: 3552000 00:40:08.016 [2024-12-09T09:52:15.518Z] tsc_hz: 2200000000 (cyc) 00:40:08.016 [2024-12-09T09:52:15.518Z] ====================================== 00:40:08.016 [2024-12-09T09:52:15.518Z] poller_cost: 619 (cyc), 281 (nsec) 00:40:08.016 00:40:08.016 real 0m1.288s 00:40:08.016 user 0m1.148s 00:40:08.016 sys 0m0.033s 00:40:08.016 09:52:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:08.016 09:52:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:40:08.016 ************************************ 00:40:08.016 END TEST thread_poller_perf 00:40:08.016 ************************************ 00:40:08.016 09:52:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:40:08.016 00:40:08.016 real 0m2.877s 00:40:08.016 user 0m2.455s 00:40:08.016 sys 0m0.200s 00:40:08.016 09:52:15 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:08.016 ************************************ 00:40:08.016 09:52:15 thread -- common/autotest_common.sh@10 -- # set +x 00:40:08.016 END TEST thread 00:40:08.016 ************************************ 00:40:08.016 09:52:15 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:40:08.016 09:52:15 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:40:08.016 09:52:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:08.016 09:52:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:08.016 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:40:08.016 ************************************ 00:40:08.016 START TEST app_cmdline 00:40:08.016 ************************************ 00:40:08.016 09:52:15 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:40:08.016 * Looking for test storage... 
00:40:08.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:40:08.016 09:52:15 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:08.016 09:52:15 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:40:08.016 09:52:15 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:08.016 09:52:15 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@345 -- # : 1 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:08.017 09:52:15 app_cmdline -- scripts/common.sh@368 -- # return 0 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:08.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.017 --rc genhtml_branch_coverage=1 00:40:08.017 --rc genhtml_function_coverage=1 00:40:08.017 --rc genhtml_legend=1 00:40:08.017 --rc geninfo_all_blocks=1 00:40:08.017 --rc geninfo_unexecuted_blocks=1 00:40:08.017 00:40:08.017 ' 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:08.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.017 --rc genhtml_branch_coverage=1 00:40:08.017 --rc genhtml_function_coverage=1 00:40:08.017 --rc genhtml_legend=1 00:40:08.017 --rc geninfo_all_blocks=1 00:40:08.017 --rc geninfo_unexecuted_blocks=1 00:40:08.017 
00:40:08.017 ' 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:08.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.017 --rc genhtml_branch_coverage=1 00:40:08.017 --rc genhtml_function_coverage=1 00:40:08.017 --rc genhtml_legend=1 00:40:08.017 --rc geninfo_all_blocks=1 00:40:08.017 --rc geninfo_unexecuted_blocks=1 00:40:08.017 00:40:08.017 ' 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:08.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.017 --rc genhtml_branch_coverage=1 00:40:08.017 --rc genhtml_function_coverage=1 00:40:08.017 --rc genhtml_legend=1 00:40:08.017 --rc geninfo_all_blocks=1 00:40:08.017 --rc geninfo_unexecuted_blocks=1 00:40:08.017 00:40:08.017 ' 00:40:08.017 09:52:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:40:08.017 09:52:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59247 00:40:08.017 09:52:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59247 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59247 ']' 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:08.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:08.017 09:52:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:40:08.017 09:52:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:40:08.017 [2024-12-09 09:52:15.476299] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:08.017 [2024-12-09 09:52:15.476409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59247 ] 00:40:08.275 [2024-12-09 09:52:15.632548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:08.275 [2024-12-09 09:52:15.671684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.275 [2024-12-09 09:52:15.719839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:08.532 09:52:15 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:08.532 09:52:15 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:40:08.532 09:52:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:40:08.791 { 00:40:08.791 "version": "SPDK v25.01-pre git sha1 070cd5283", 00:40:08.791 "fields": { 00:40:08.791 "major": 25, 00:40:08.791 "minor": 1, 00:40:08.791 "patch": 0, 00:40:08.791 "suffix": "-pre", 00:40:08.791 "commit": "070cd5283" 00:40:08.791 } 00:40:08.791 } 00:40:08.791 09:52:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:40:08.791 09:52:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:40:08.791 09:52:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:40:08.791 09:52:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:40:08.791 09:52:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:40:08.791 09:52:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:40:08.791 09:52:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.791 09:52:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:40:08.791 09:52:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:40:08.791 09:52:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:08.791 09:52:16 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:40:09.049 request: 00:40:09.049 { 00:40:09.049 "method": "env_dpdk_get_mem_stats", 00:40:09.049 "req_id": 1 00:40:09.049 } 00:40:09.049 Got JSON-RPC error response 00:40:09.049 response: 00:40:09.049 { 00:40:09.049 "code": -32601, 00:40:09.049 "message": "Method not found" 00:40:09.049 } 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:09.049 09:52:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59247 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59247 ']' 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59247 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59247 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:09.049 killing process with pid 59247 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59247' 00:40:09.049 09:52:16 app_cmdline -- common/autotest_common.sh@973 -- # kill 59247 00:40:09.307 09:52:16 app_cmdline -- common/autotest_common.sh@978 -- # wait 59247 00:40:09.566 00:40:09.566 real 0m1.615s 00:40:09.566 user 0m2.103s 00:40:09.566 sys 0m0.395s 00:40:09.566 09:52:16 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:09.566 09:52:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:40:09.566 ************************************ 00:40:09.566 END TEST app_cmdline 00:40:09.566 ************************************ 00:40:09.566 09:52:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:40:09.566 09:52:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:09.566 09:52:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:09.566 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:40:09.566 ************************************ 00:40:09.566 START TEST version 00:40:09.566 ************************************ 00:40:09.566 09:52:16 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:40:09.566 * Looking for test storage... 
00:40:09.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:40:09.566 09:52:16 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:09.566 09:52:16 version -- common/autotest_common.sh@1711 -- # lcov --version 00:40:09.566 09:52:16 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:09.825 09:52:17 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:09.825 09:52:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:09.825 09:52:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:09.825 09:52:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:09.825 09:52:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:40:09.825 09:52:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:40:09.825 09:52:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:40:09.825 09:52:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:40:09.825 09:52:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:40:09.825 09:52:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:40:09.825 09:52:17 version -- scripts/common.sh@341 -- # ver2_l=1 00:40:09.825 09:52:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:09.825 09:52:17 version -- scripts/common.sh@344 -- # case "$op" in 00:40:09.825 09:52:17 version -- scripts/common.sh@345 -- # : 1 00:40:09.825 09:52:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:09.825 09:52:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:09.825 09:52:17 version -- scripts/common.sh@365 -- # decimal 1 00:40:09.825 09:52:17 version -- scripts/common.sh@353 -- # local d=1 00:40:09.825 09:52:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:09.825 09:52:17 version -- scripts/common.sh@355 -- # echo 1 00:40:09.825 09:52:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:40:09.825 09:52:17 version -- scripts/common.sh@366 -- # decimal 2 00:40:09.825 09:52:17 version -- scripts/common.sh@353 -- # local d=2 00:40:09.825 09:52:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:09.825 09:52:17 version -- scripts/common.sh@355 -- # echo 2 00:40:09.825 09:52:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:40:09.825 09:52:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:09.825 09:52:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:09.825 09:52:17 version -- scripts/common.sh@368 -- # return 0 00:40:09.825 09:52:17 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:09.825 09:52:17 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.825 --rc genhtml_branch_coverage=1 00:40:09.825 --rc genhtml_function_coverage=1 00:40:09.825 --rc genhtml_legend=1 00:40:09.825 --rc geninfo_all_blocks=1 00:40:09.825 --rc geninfo_unexecuted_blocks=1 00:40:09.825 00:40:09.825 ' 00:40:09.825 09:52:17 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.825 --rc genhtml_branch_coverage=1 00:40:09.825 --rc genhtml_function_coverage=1 00:40:09.825 --rc genhtml_legend=1 00:40:09.825 --rc geninfo_all_blocks=1 00:40:09.825 --rc geninfo_unexecuted_blocks=1 00:40:09.825 00:40:09.825 ' 00:40:09.825 09:52:17 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:09.825 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:40:09.825 --rc genhtml_branch_coverage=1 00:40:09.825 --rc genhtml_function_coverage=1 00:40:09.825 --rc genhtml_legend=1 00:40:09.825 --rc geninfo_all_blocks=1 00:40:09.825 --rc geninfo_unexecuted_blocks=1 00:40:09.825 00:40:09.825 ' 00:40:09.825 09:52:17 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.825 --rc genhtml_branch_coverage=1 00:40:09.825 --rc genhtml_function_coverage=1 00:40:09.825 --rc genhtml_legend=1 00:40:09.825 --rc geninfo_all_blocks=1 00:40:09.825 --rc geninfo_unexecuted_blocks=1 00:40:09.825 00:40:09.825 ' 00:40:09.825 09:52:17 version -- app/version.sh@17 -- # get_header_version major 00:40:09.826 09:52:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:40:09.826 09:52:17 version -- app/version.sh@14 -- # cut -f2 00:40:09.826 09:52:17 version -- app/version.sh@14 -- # tr -d '"' 00:40:09.826 09:52:17 version -- app/version.sh@17 -- # major=25 00:40:09.826 09:52:17 version -- app/version.sh@18 -- # get_header_version minor 00:40:09.826 09:52:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:40:09.826 09:52:17 version -- app/version.sh@14 -- # cut -f2 00:40:09.826 09:52:17 version -- app/version.sh@14 -- # tr -d '"' 00:40:09.826 09:52:17 version -- app/version.sh@18 -- # minor=1 00:40:09.826 09:52:17 version -- app/version.sh@19 -- # get_header_version patch 00:40:09.826 09:52:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:40:09.826 09:52:17 version -- app/version.sh@14 -- # cut -f2 00:40:09.826 09:52:17 version -- app/version.sh@14 -- # tr -d '"' 00:40:09.826 09:52:17 version -- app/version.sh@19 -- # patch=0 00:40:09.826 09:52:17 version -- app/version.sh@20 -- # get_header_version suffix 00:40:09.826 09:52:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:40:09.826 09:52:17 version -- app/version.sh@14 -- # cut -f2 00:40:09.826 09:52:17 version -- app/version.sh@14 -- # tr -d '"' 00:40:09.826 09:52:17 version -- app/version.sh@20 -- # suffix=-pre 00:40:09.826 09:52:17 version -- app/version.sh@22 -- # version=25.1 00:40:09.826 09:52:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:40:09.826 09:52:17 version -- app/version.sh@28 -- # version=25.1rc0 00:40:09.826 09:52:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:40:09.826 09:52:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:40:09.826 09:52:17 version -- app/version.sh@30 -- # py_version=25.1rc0 00:40:09.826 09:52:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:40:09.826 00:40:09.826 real 0m0.263s 00:40:09.826 user 0m0.173s 00:40:09.826 sys 0m0.126s 00:40:09.826 09:52:17 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:09.826 09:52:17 version -- common/autotest_common.sh@10 -- # set +x 00:40:09.826 ************************************ 00:40:09.826 END TEST version 00:40:09.826 ************************************ 00:40:09.826 09:52:17 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:40:09.826 09:52:17 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:40:09.826 09:52:17 -- spdk/autotest.sh@194 -- # uname -s 00:40:09.826 09:52:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:40:09.826 09:52:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:40:09.826 09:52:17 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:40:09.826 09:52:17 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:40:09.826 09:52:17 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:40:09.826 09:52:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:09.826 09:52:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:09.826 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:40:09.826 ************************************ 00:40:09.826 START TEST spdk_dd 00:40:09.826 ************************************ 00:40:09.826 09:52:17 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:40:09.826 * Looking for test storage... 00:40:09.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:09.826 09:52:17 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:09.826 09:52:17 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:40:09.826 09:52:17 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:10.085 09:52:17 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@345 -- # : 1 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@368 -- # return 0 00:40:10.085 09:52:17 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:10.085 09:52:17 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:10.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.085 --rc genhtml_branch_coverage=1 00:40:10.085 --rc genhtml_function_coverage=1 00:40:10.085 --rc genhtml_legend=1 00:40:10.085 --rc geninfo_all_blocks=1 00:40:10.085 --rc geninfo_unexecuted_blocks=1 00:40:10.085 00:40:10.085 ' 00:40:10.085 09:52:17 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:10.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.085 --rc genhtml_branch_coverage=1 00:40:10.085 --rc genhtml_function_coverage=1 00:40:10.085 --rc genhtml_legend=1 00:40:10.085 --rc geninfo_all_blocks=1 00:40:10.085 --rc geninfo_unexecuted_blocks=1 00:40:10.085 00:40:10.085 ' 00:40:10.085 09:52:17 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:10.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.085 --rc genhtml_branch_coverage=1 00:40:10.085 --rc genhtml_function_coverage=1 00:40:10.085 --rc genhtml_legend=1 00:40:10.085 --rc geninfo_all_blocks=1 00:40:10.085 --rc geninfo_unexecuted_blocks=1 00:40:10.085 00:40:10.085 ' 00:40:10.085 09:52:17 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:10.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.085 --rc genhtml_branch_coverage=1 00:40:10.085 --rc genhtml_function_coverage=1 00:40:10.085 --rc genhtml_legend=1 00:40:10.085 --rc geninfo_all_blocks=1 00:40:10.085 --rc geninfo_unexecuted_blocks=1 00:40:10.085 00:40:10.085 ' 00:40:10.085 09:52:17 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:10.085 09:52:17 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:10.085 09:52:17 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.085 09:52:17 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.085 09:52:17 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.085 09:52:17 spdk_dd -- paths/export.sh@5 -- # export PATH 00:40:10.085 09:52:17 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.085 09:52:17 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:10.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:10.345 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:40:10.345 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:40:10.345 09:52:17 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:40:10.345 09:52:17 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@233 -- # local class 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@235 -- # local progif 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@236 -- # class=01 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:40:10.345 09:52:17 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@18 -- # local i 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@27 -- # return 0 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@18 -- # local i 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@27 -- # return 0 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:40:10.345 09:52:17 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:40:10.345 09:52:17 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@139 -- # local lib 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
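The read loop being traced through this stretch is dd/common.sh's check_liburing: it walks the NEEDED entries objdump reports for the spdk_dd binary and flags the build as uring-enabled when one of them matches liburing.so.*. An equivalent standalone form, with the binary path taken from this workspace:

#!/usr/bin/env bash
# Standalone form of the NEEDED scan traced around this point in the log.
BIN=${1:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}

liburing_in_use=0
while read -r _ lib _; do
    # each line looks like '  NEEDED   liburing.so.2'; the second field is the soname
    if [[ $lib == liburing.so.* ]]; then
        liburing_in_use=1
    fi
done < <(objdump -p "$BIN" | grep NEEDED)

(( liburing_in_use )) && printf '* %s linked to liburing\n' "$(basename "$BIN")"
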
00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.345 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.346 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:40:10.605 * spdk_dd linked to liburing 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:40:10.605 09:52:17 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:40:10.605 09:52:17 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:40:10.606 09:52:17 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:40:10.606 09:52:17 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:40:10.606 09:52:17 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:40:10.606 09:52:17 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:40:10.606 09:52:17 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:40:10.606 09:52:17 spdk_dd -- dd/common.sh@153 -- # return 0 00:40:10.606 09:52:17 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:40:10.606 09:52:17 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:40:10.606 09:52:17 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:10.606 09:52:17 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:10.606 09:52:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:40:10.606 ************************************ 00:40:10.606 START TEST spdk_dd_basic_rw 00:40:10.606 ************************************ 00:40:10.606 09:52:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:40:10.606 * Looking for test storage... 00:40:10.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:10.606 09:52:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:10.606 09:52:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:10.606 09:52:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:10.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.606 --rc genhtml_branch_coverage=1 00:40:10.606 --rc genhtml_function_coverage=1 00:40:10.606 --rc genhtml_legend=1 00:40:10.606 --rc geninfo_all_blocks=1 00:40:10.606 --rc geninfo_unexecuted_blocks=1 00:40:10.606 00:40:10.606 ' 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:10.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.606 --rc genhtml_branch_coverage=1 00:40:10.606 --rc genhtml_function_coverage=1 00:40:10.606 --rc genhtml_legend=1 00:40:10.606 --rc geninfo_all_blocks=1 00:40:10.606 --rc geninfo_unexecuted_blocks=1 00:40:10.606 00:40:10.606 ' 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:10.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.606 --rc genhtml_branch_coverage=1 00:40:10.606 --rc genhtml_function_coverage=1 00:40:10.606 --rc genhtml_legend=1 00:40:10.606 --rc geninfo_all_blocks=1 00:40:10.606 --rc geninfo_unexecuted_blocks=1 00:40:10.606 00:40:10.606 ' 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:10.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.606 --rc genhtml_branch_coverage=1 00:40:10.606 --rc genhtml_function_coverage=1 00:40:10.606 --rc genhtml_legend=1 00:40:10.606 --rc geninfo_all_blocks=1 00:40:10.606 --rc geninfo_unexecuted_blocks=1 00:40:10.606 00:40:10.606 ' 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
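The get_native_nvme_bs call that follows captures spdk_nvme_identify output for the controller at 0000:00:10.0 and parses the current LBA format out of it (lbaf=04 further down, i.e. a 4096-byte data size). A rough equivalent is sketched here; only the 'Current LBA Format' regex is visible verbatim in this trace, so the data-size match is an assumption about what dd/common.sh does next:

#!/usr/bin/env bash
# Approximation of the native-block-size lookup performed by dd/common.sh below.
PCI=${1:-0000:00:10.0}
IDENTIFY=${IDENTIFY:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify}

id=$("$IDENTIFY" -r "trtype:pcie traddr:$PCI")

cur_re='Current LBA Format: *LBA Format #([0-9]+)'
if [[ $id =~ $cur_re ]]; then
    lbaf=${BASH_REMATCH[1]}
    # e.g. 'LBA Format #04: Data Size: 4096 Metadata Size: 0' -> 4096
    size_re="LBA Format #${lbaf}: *Data Size: *([0-9]+)"
    [[ $id =~ $size_re ]] && echo "native block size for $PCI: ${BASH_REMATCH[1]}"
fi
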
00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:40:10.606 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:40:10.607 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:40:10.867 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:10.868 ************************************ 00:40:10.868 START TEST dd_bs_lt_native_bs 00:40:10.868 ************************************ 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:10.868 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:40:11.126 [2024-12-09 09:52:18.377508] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:11.126 [2024-12-09 09:52:18.377587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59590 ] 00:40:11.126 { 00:40:11.126 "subsystems": [ 00:40:11.126 { 00:40:11.126 "subsystem": "bdev", 00:40:11.126 "config": [ 00:40:11.126 { 00:40:11.126 "params": { 00:40:11.126 "trtype": "pcie", 00:40:11.126 "traddr": "0000:00:10.0", 00:40:11.126 "name": "Nvme0" 00:40:11.126 }, 00:40:11.126 "method": "bdev_nvme_attach_controller" 00:40:11.126 }, 00:40:11.126 { 00:40:11.126 "method": "bdev_wait_for_examine" 00:40:11.126 } 00:40:11.126 ] 00:40:11.126 } 00:40:11.126 ] 00:40:11.126 } 00:40:11.126 [2024-12-09 09:52:18.532190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.126 [2024-12-09 09:52:18.582252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:11.127 [2024-12-09 09:52:18.617057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:11.385 [2024-12-09 09:52:18.713599] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:40:11.385 [2024-12-09 09:52:18.713676] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:11.385 [2024-12-09 09:52:18.787483] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:11.648 00:40:11.648 real 0m0.569s 00:40:11.648 user 0m0.423s 00:40:11.648 sys 0m0.110s 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:11.648 
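The trace above is dd/common.sh deriving the drive's native block size before the test proper: the controller identify dump is matched once for the current LBA format index (#04) and once for that format's data size (4096 bytes), which becomes native_bs. dd_bs_lt_native_bs then asserts that spdk_dd refuses a --bs smaller than that value. A minimal sketch of the same extraction, assuming the identify text is already captured in a shell variable (the variable names and the trailing NOT expectation below are illustrative reconstructions, not the script's own code):

  re='Current LBA Format: *LBA Format #([0-9]+)'
  [[ $id_output =~ $re ]] && lbaf=${BASH_REMATCH[1]}            # -> 04 in this run
  re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
  [[ $id_output =~ $re ]] && native_bs=${BASH_REMATCH[1]}       # -> 4096
  # the test then expects a sub-native --bs to be rejected, e.g.
  #   spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
  # fails with: --bs value cannot be less than input (1) neither output (4096) native block size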
************************************ 00:40:11.648 END TEST dd_bs_lt_native_bs 00:40:11.648 ************************************ 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:11.648 ************************************ 00:40:11.648 START TEST dd_rw 00:40:11.648 ************************************ 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:11.648 09:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:12.243 09:52:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:40:12.243 09:52:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:12.243 09:52:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:12.243 09:52:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:12.243 { 00:40:12.243 "subsystems": [ 00:40:12.243 { 00:40:12.243 "subsystem": "bdev", 00:40:12.243 "config": [ 00:40:12.243 { 00:40:12.243 "params": { 00:40:12.243 "trtype": "pcie", 00:40:12.244 "traddr": "0000:00:10.0", 00:40:12.244 "name": "Nvme0" 00:40:12.244 }, 00:40:12.244 "method": "bdev_nvme_attach_controller" 00:40:12.244 }, 00:40:12.244 { 00:40:12.244 "method": "bdev_wait_for_examine" 00:40:12.244 } 00:40:12.244 ] 
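dd_rw repeats one write, read-back and verify cycle for every (block size, queue depth) pair: block sizes 4096, 8192 and 16384 (native_bs shifted left by 0, 1 and 2) against queue depths 1 and 64, as set up in the basic_rw fragments traced above. Condensed into a sketch reconstructed from the commands that follow (conf.json stands in for the per-invocation --json /dev/fd/62 bdev config, /dev/urandom for the gen_bytes helper, and the count arithmetic is simplified so it reproduces the 15/7/3 counts used in this run):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  for bs in 4096 8192 16384; do
      for qd in 1 64; do
          count=$((61440 / bs))                                  # 15, 7, 3
          head -c $((count * bs)) /dev/urandom > dd.dump0        # test pattern
          "$DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json conf.json
          "$DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json conf.json
          diff -q dd.dump0 dd.dump1                              # data must survive the round trip
          "$DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json conf.json   # clear_nvme
      done
  done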
00:40:12.244 } 00:40:12.244 ] 00:40:12.244 } 00:40:12.244 [2024-12-09 09:52:19.714598] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:12.244 [2024-12-09 09:52:19.714737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59622 ] 00:40:12.511 [2024-12-09 09:52:19.878424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:12.511 [2024-12-09 09:52:19.914397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.511 [2024-12-09 09:52:19.946326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:12.774  [2024-12-09T09:52:20.539Z] Copying: 60/60 [kB] (average 29 MBps) 00:40:13.037 00:40:13.037 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:13.037 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:40:13.037 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:13.037 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:13.037 { 00:40:13.037 "subsystems": [ 00:40:13.037 { 00:40:13.037 "subsystem": "bdev", 00:40:13.037 "config": [ 00:40:13.037 { 00:40:13.037 "params": { 00:40:13.037 "trtype": "pcie", 00:40:13.037 "traddr": "0000:00:10.0", 00:40:13.037 "name": "Nvme0" 00:40:13.037 }, 00:40:13.038 "method": "bdev_nvme_attach_controller" 00:40:13.038 }, 00:40:13.038 { 00:40:13.038 "method": "bdev_wait_for_examine" 00:40:13.038 } 00:40:13.038 ] 00:40:13.038 } 00:40:13.038 ] 00:40:13.038 } 00:40:13.038 [2024-12-09 09:52:20.357272] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:13.038 [2024-12-09 09:52:20.357376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59636 ] 00:40:13.038 [2024-12-09 09:52:20.507493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.301 [2024-12-09 09:52:20.547414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.301 [2024-12-09 09:52:20.579464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:13.301  [2024-12-09T09:52:21.066Z] Copying: 60/60 [kB] (average 19 MBps) 00:40:13.564 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:13.564 09:52:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:13.564 { 00:40:13.564 "subsystems": [ 00:40:13.564 { 00:40:13.564 "subsystem": "bdev", 00:40:13.564 "config": [ 00:40:13.564 { 00:40:13.564 "params": { 00:40:13.564 "trtype": "pcie", 00:40:13.564 "traddr": "0000:00:10.0", 00:40:13.564 "name": "Nvme0" 00:40:13.564 }, 00:40:13.564 "method": "bdev_nvme_attach_controller" 00:40:13.564 }, 00:40:13.564 { 00:40:13.564 "method": "bdev_wait_for_examine" 00:40:13.564 } 00:40:13.564 ] 00:40:13.564 } 00:40:13.564 ] 00:40:13.564 } 00:40:13.564 [2024-12-09 09:52:20.933367] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:13.564 [2024-12-09 09:52:20.933482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59657 ] 00:40:13.826 [2024-12-09 09:52:21.085785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.826 [2024-12-09 09:52:21.118263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.826 [2024-12-09 09:52:21.149449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:13.826  [2024-12-09T09:52:21.587Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:14.085 00:40:14.085 09:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:14.085 09:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:40:14.085 09:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:40:14.085 09:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:40:14.085 09:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:40:14.085 09:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:14.085 09:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:14.727 09:52:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:40:14.727 09:52:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:14.727 09:52:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:14.727 09:52:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:14.997 { 00:40:14.997 "subsystems": [ 00:40:14.997 { 00:40:14.997 "subsystem": "bdev", 00:40:14.997 "config": [ 00:40:14.997 { 00:40:14.997 "params": { 00:40:14.997 "trtype": "pcie", 00:40:14.997 "traddr": "0000:00:10.0", 00:40:14.997 "name": "Nvme0" 00:40:14.997 }, 00:40:14.997 "method": "bdev_nvme_attach_controller" 00:40:14.997 }, 00:40:14.997 { 00:40:14.997 "method": "bdev_wait_for_examine" 00:40:14.997 } 00:40:14.997 ] 00:40:14.997 } 00:40:14.997 ] 00:40:14.997 } 00:40:14.997 [2024-12-09 09:52:22.227798] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:14.997 [2024-12-09 09:52:22.227893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59676 ] 00:40:14.997 [2024-12-09 09:52:22.387880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.997 [2024-12-09 09:52:22.427253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:14.997 [2024-12-09 09:52:22.461691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:15.255  [2024-12-09T09:52:22.757Z] Copying: 60/60 [kB] (average 58 MBps) 00:40:15.255 00:40:15.514 09:52:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:15.514 09:52:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:40:15.514 09:52:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:15.514 09:52:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:15.514 [2024-12-09 09:52:22.804259] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:15.514 [2024-12-09 09:52:22.804337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59689 ] 00:40:15.514 { 00:40:15.514 "subsystems": [ 00:40:15.514 { 00:40:15.514 "subsystem": "bdev", 00:40:15.514 "config": [ 00:40:15.514 { 00:40:15.514 "params": { 00:40:15.514 "trtype": "pcie", 00:40:15.514 "traddr": "0000:00:10.0", 00:40:15.514 "name": "Nvme0" 00:40:15.514 }, 00:40:15.514 "method": "bdev_nvme_attach_controller" 00:40:15.514 }, 00:40:15.514 { 00:40:15.514 "method": "bdev_wait_for_examine" 00:40:15.514 } 00:40:15.514 ] 00:40:15.514 } 00:40:15.514 ] 00:40:15.514 } 00:40:15.514 [2024-12-09 09:52:22.954758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.515 [2024-12-09 09:52:22.988556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.773 [2024-12-09 09:52:23.019409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:15.773  [2024-12-09T09:52:23.533Z] Copying: 60/60 [kB] (average 58 MBps) 00:40:16.031 00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:16.031 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:16.031 [2024-12-09 09:52:23.348669] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:16.031 [2024-12-09 09:52:23.348776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59705 ] 00:40:16.031 { 00:40:16.031 "subsystems": [ 00:40:16.031 { 00:40:16.031 "subsystem": "bdev", 00:40:16.031 "config": [ 00:40:16.031 { 00:40:16.031 "params": { 00:40:16.031 "trtype": "pcie", 00:40:16.031 "traddr": "0000:00:10.0", 00:40:16.031 "name": "Nvme0" 00:40:16.031 }, 00:40:16.031 "method": "bdev_nvme_attach_controller" 00:40:16.031 }, 00:40:16.031 { 00:40:16.031 "method": "bdev_wait_for_examine" 00:40:16.031 } 00:40:16.031 ] 00:40:16.031 } 00:40:16.031 ] 00:40:16.031 } 00:40:16.031 [2024-12-09 09:52:23.501866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.289 [2024-12-09 09:52:23.541938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.289 [2024-12-09 09:52:23.576181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:16.289  [2024-12-09T09:52:24.050Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:16.548 00:40:16.548 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:40:16.548 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:16.548 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:40:16.548 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:40:16.548 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:40:16.548 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:40:16.548 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:16.548 09:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:17.115 09:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:40:17.115 09:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:17.115 09:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:17.115 09:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:17.115 { 00:40:17.115 "subsystems": [ 00:40:17.115 { 00:40:17.115 "subsystem": "bdev", 00:40:17.115 "config": [ 00:40:17.115 { 00:40:17.115 "params": { 00:40:17.115 "trtype": "pcie", 00:40:17.115 "traddr": "0000:00:10.0", 00:40:17.115 "name": "Nvme0" 00:40:17.115 }, 00:40:17.115 "method": "bdev_nvme_attach_controller" 00:40:17.115 }, 00:40:17.115 { 00:40:17.115 "method": "bdev_wait_for_examine" 00:40:17.115 } 00:40:17.115 ] 00:40:17.115 } 00:40:17.115 ] 00:40:17.115 } 00:40:17.115 [2024-12-09 09:52:24.515417] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:17.115 [2024-12-09 09:52:24.516060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59724 ] 00:40:17.373 [2024-12-09 09:52:24.717635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.373 [2024-12-09 09:52:24.750175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:17.373 [2024-12-09 09:52:24.779363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:17.373  [2024-12-09T09:52:25.135Z] Copying: 56/56 [kB] (average 54 MBps) 00:40:17.633 00:40:17.633 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:17.633 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:40:17.633 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:17.633 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:17.633 { 00:40:17.633 "subsystems": [ 00:40:17.633 { 00:40:17.633 "subsystem": "bdev", 00:40:17.633 "config": [ 00:40:17.633 { 00:40:17.633 "params": { 00:40:17.633 "trtype": "pcie", 00:40:17.633 "traddr": "0000:00:10.0", 00:40:17.633 "name": "Nvme0" 00:40:17.633 }, 00:40:17.633 "method": "bdev_nvme_attach_controller" 00:40:17.633 }, 00:40:17.633 { 00:40:17.633 "method": "bdev_wait_for_examine" 00:40:17.633 } 00:40:17.633 ] 00:40:17.633 } 00:40:17.633 ] 00:40:17.633 } 00:40:17.633 [2024-12-09 09:52:25.109154] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:17.633 [2024-12-09 09:52:25.109260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59743 ] 00:40:17.891 [2024-12-09 09:52:25.266891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.891 [2024-12-09 09:52:25.305231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:17.891 [2024-12-09 09:52:25.337946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:18.160  [2024-12-09T09:52:25.662Z] Copying: 56/56 [kB] (average 27 MBps) 00:40:18.160 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:18.160 09:52:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:18.421 [2024-12-09 09:52:25.675424] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:18.421 [2024-12-09 09:52:25.675511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59754 ] 00:40:18.421 { 00:40:18.421 "subsystems": [ 00:40:18.421 { 00:40:18.421 "subsystem": "bdev", 00:40:18.421 "config": [ 00:40:18.421 { 00:40:18.421 "params": { 00:40:18.421 "trtype": "pcie", 00:40:18.421 "traddr": "0000:00:10.0", 00:40:18.421 "name": "Nvme0" 00:40:18.421 }, 00:40:18.421 "method": "bdev_nvme_attach_controller" 00:40:18.421 }, 00:40:18.421 { 00:40:18.421 "method": "bdev_wait_for_examine" 00:40:18.421 } 00:40:18.421 ] 00:40:18.421 } 00:40:18.421 ] 00:40:18.421 } 00:40:18.421 [2024-12-09 09:52:25.822916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:18.421 [2024-12-09 09:52:25.856496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.421 [2024-12-09 09:52:25.887500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:18.682  [2024-12-09T09:52:26.184Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:18.682 00:40:18.682 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:18.682 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:40:18.682 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:40:18.682 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:40:18.682 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:40:18.682 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:18.682 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:19.619 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:40:19.619 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:19.619 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:19.619 09:52:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:19.619 [2024-12-09 09:52:26.874952] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:19.619 [2024-12-09 09:52:26.875715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59777 ] 00:40:19.619 { 00:40:19.619 "subsystems": [ 00:40:19.619 { 00:40:19.619 "subsystem": "bdev", 00:40:19.619 "config": [ 00:40:19.619 { 00:40:19.619 "params": { 00:40:19.619 "trtype": "pcie", 00:40:19.619 "traddr": "0000:00:10.0", 00:40:19.619 "name": "Nvme0" 00:40:19.619 }, 00:40:19.619 "method": "bdev_nvme_attach_controller" 00:40:19.619 }, 00:40:19.619 { 00:40:19.619 "method": "bdev_wait_for_examine" 00:40:19.619 } 00:40:19.619 ] 00:40:19.619 } 00:40:19.619 ] 00:40:19.619 } 00:40:19.619 [2024-12-09 09:52:27.028059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.619 [2024-12-09 09:52:27.061629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.619 [2024-12-09 09:52:27.092836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:19.878  [2024-12-09T09:52:27.380Z] Copying: 56/56 [kB] (average 54 MBps) 00:40:19.878 00:40:19.878 09:52:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:19.878 09:52:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:40:19.878 09:52:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:19.878 09:52:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:20.138 [2024-12-09 09:52:27.424947] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:20.138 [2024-12-09 09:52:27.425074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59791 ] 00:40:20.138 { 00:40:20.138 "subsystems": [ 00:40:20.138 { 00:40:20.138 "subsystem": "bdev", 00:40:20.138 "config": [ 00:40:20.138 { 00:40:20.138 "params": { 00:40:20.138 "trtype": "pcie", 00:40:20.138 "traddr": "0000:00:10.0", 00:40:20.138 "name": "Nvme0" 00:40:20.138 }, 00:40:20.138 "method": "bdev_nvme_attach_controller" 00:40:20.138 }, 00:40:20.138 { 00:40:20.138 "method": "bdev_wait_for_examine" 00:40:20.138 } 00:40:20.138 ] 00:40:20.138 } 00:40:20.138 ] 00:40:20.138 } 00:40:20.138 [2024-12-09 09:52:27.580194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.138 [2024-12-09 09:52:27.621560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.398 [2024-12-09 09:52:27.692610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:20.398  [2024-12-09T09:52:28.159Z] Copying: 56/56 [kB] (average 54 MBps) 00:40:20.657 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:20.658 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:20.658 { 00:40:20.658 "subsystems": [ 00:40:20.658 { 00:40:20.658 "subsystem": "bdev", 00:40:20.658 "config": [ 00:40:20.658 { 00:40:20.658 "params": { 00:40:20.658 "trtype": "pcie", 00:40:20.658 "traddr": "0000:00:10.0", 00:40:20.658 "name": "Nvme0" 00:40:20.658 }, 00:40:20.658 "method": "bdev_nvme_attach_controller" 00:40:20.658 }, 00:40:20.658 { 00:40:20.658 "method": "bdev_wait_for_examine" 00:40:20.658 } 00:40:20.658 ] 00:40:20.658 } 00:40:20.658 ] 00:40:20.658 } 00:40:20.658 [2024-12-09 09:52:28.144044] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:20.658 [2024-12-09 09:52:28.144155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59812 ] 00:40:20.916 [2024-12-09 09:52:28.299678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.916 [2024-12-09 09:52:28.334506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.916 [2024-12-09 09:52:28.367112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:21.175  [2024-12-09T09:52:28.677Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:21.175 00:40:21.175 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:40:21.175 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:21.175 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:40:21.175 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:40:21.175 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:40:21.175 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:40:21.175 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:21.175 09:52:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:21.741 09:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:40:21.741 09:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:21.741 09:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:21.741 09:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:21.741 [2024-12-09 09:52:29.201230] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:21.741 [2024-12-09 09:52:29.201464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59831 ] 00:40:21.741 { 00:40:21.741 "subsystems": [ 00:40:21.741 { 00:40:21.741 "subsystem": "bdev", 00:40:21.741 "config": [ 00:40:21.741 { 00:40:21.741 "params": { 00:40:21.741 "trtype": "pcie", 00:40:21.741 "traddr": "0000:00:10.0", 00:40:21.741 "name": "Nvme0" 00:40:21.741 }, 00:40:21.742 "method": "bdev_nvme_attach_controller" 00:40:21.742 }, 00:40:21.742 { 00:40:21.742 "method": "bdev_wait_for_examine" 00:40:21.742 } 00:40:21.742 ] 00:40:21.742 } 00:40:21.742 ] 00:40:21.742 } 00:40:21.999 [2024-12-09 09:52:29.355092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:21.999 [2024-12-09 09:52:29.395101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:21.999 [2024-12-09 09:52:29.429775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:22.257  [2024-12-09T09:52:29.759Z] Copying: 48/48 [kB] (average 46 MBps) 00:40:22.257 00:40:22.257 09:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:22.257 09:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:40:22.257 09:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:22.257 09:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:22.515 [2024-12-09 09:52:29.784299] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:22.515 [2024-12-09 09:52:29.784392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59839 ] 00:40:22.515 { 00:40:22.515 "subsystems": [ 00:40:22.515 { 00:40:22.515 "subsystem": "bdev", 00:40:22.515 "config": [ 00:40:22.515 { 00:40:22.515 "params": { 00:40:22.515 "trtype": "pcie", 00:40:22.515 "traddr": "0000:00:10.0", 00:40:22.515 "name": "Nvme0" 00:40:22.515 }, 00:40:22.515 "method": "bdev_nvme_attach_controller" 00:40:22.515 }, 00:40:22.515 { 00:40:22.515 "method": "bdev_wait_for_examine" 00:40:22.515 } 00:40:22.515 ] 00:40:22.515 } 00:40:22.515 ] 00:40:22.515 } 00:40:22.515 [2024-12-09 09:52:29.940048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:22.515 [2024-12-09 09:52:29.979553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:22.515 [2024-12-09 09:52:30.014361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:22.773  [2024-12-09T09:52:30.533Z] Copying: 48/48 [kB] (average 46 MBps) 00:40:23.031 00:40:23.031 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:23.031 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:40:23.031 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:23.031 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:23.031 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:40:23.032 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:23.032 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:23.032 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:23.032 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:23.032 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:23.032 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:23.032 [2024-12-09 09:52:30.388077] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:23.032 [2024-12-09 09:52:30.388412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59860 ] 00:40:23.032 { 00:40:23.032 "subsystems": [ 00:40:23.032 { 00:40:23.032 "subsystem": "bdev", 00:40:23.032 "config": [ 00:40:23.032 { 00:40:23.032 "params": { 00:40:23.032 "trtype": "pcie", 00:40:23.032 "traddr": "0000:00:10.0", 00:40:23.032 "name": "Nvme0" 00:40:23.032 }, 00:40:23.032 "method": "bdev_nvme_attach_controller" 00:40:23.032 }, 00:40:23.032 { 00:40:23.032 "method": "bdev_wait_for_examine" 00:40:23.032 } 00:40:23.032 ] 00:40:23.032 } 00:40:23.032 ] 00:40:23.032 } 00:40:23.290 [2024-12-09 09:52:30.548393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.290 [2024-12-09 09:52:30.590300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.290 [2024-12-09 09:52:30.626107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:23.290  [2024-12-09T09:52:31.050Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:23.548 00:40:23.548 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:23.548 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:40:23.548 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:40:23.548 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:40:23.548 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:40:23.548 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:23.548 09:52:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:24.114 09:52:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:40:24.114 09:52:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:24.114 09:52:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:24.114 09:52:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:24.114 { 00:40:24.114 "subsystems": [ 00:40:24.114 { 00:40:24.114 "subsystem": "bdev", 00:40:24.114 "config": [ 00:40:24.114 { 00:40:24.114 "params": { 00:40:24.114 "trtype": "pcie", 00:40:24.114 "traddr": "0000:00:10.0", 00:40:24.114 "name": "Nvme0" 00:40:24.114 }, 00:40:24.114 "method": "bdev_nvme_attach_controller" 00:40:24.114 }, 00:40:24.114 { 00:40:24.114 "method": "bdev_wait_for_examine" 00:40:24.114 } 00:40:24.114 ] 00:40:24.114 } 00:40:24.114 ] 00:40:24.114 } 00:40:24.114 [2024-12-09 09:52:31.551263] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:24.114 [2024-12-09 09:52:31.551361] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59879 ] 00:40:24.372 [2024-12-09 09:52:31.708651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.372 [2024-12-09 09:52:31.750135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:24.372 [2024-12-09 09:52:31.785636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:24.630  [2024-12-09T09:52:32.132Z] Copying: 48/48 [kB] (average 46 MBps) 00:40:24.630 00:40:24.630 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:24.630 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:40:24.630 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:24.630 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:24.889 { 00:40:24.889 "subsystems": [ 00:40:24.889 { 00:40:24.889 "subsystem": "bdev", 00:40:24.889 "config": [ 00:40:24.889 { 00:40:24.889 "params": { 00:40:24.889 "trtype": "pcie", 00:40:24.889 "traddr": "0000:00:10.0", 00:40:24.889 "name": "Nvme0" 00:40:24.889 }, 00:40:24.889 "method": "bdev_nvme_attach_controller" 00:40:24.889 }, 00:40:24.889 { 00:40:24.889 "method": "bdev_wait_for_examine" 00:40:24.889 } 00:40:24.889 ] 00:40:24.889 } 00:40:24.889 ] 00:40:24.889 } 00:40:24.889 [2024-12-09 09:52:32.141054] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:24.889 [2024-12-09 09:52:32.141158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59898 ] 00:40:24.889 [2024-12-09 09:52:32.299798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.889 [2024-12-09 09:52:32.341929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:24.889 [2024-12-09 09:52:32.376276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:25.147  [2024-12-09T09:52:32.908Z] Copying: 48/48 [kB] (average 46 MBps) 00:40:25.406 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:25.406 09:52:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:25.406 [2024-12-09 09:52:32.723109] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:25.406 [2024-12-09 09:52:32.723216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59908 ] 00:40:25.406 { 00:40:25.406 "subsystems": [ 00:40:25.406 { 00:40:25.406 "subsystem": "bdev", 00:40:25.406 "config": [ 00:40:25.406 { 00:40:25.406 "params": { 00:40:25.406 "trtype": "pcie", 00:40:25.406 "traddr": "0000:00:10.0", 00:40:25.406 "name": "Nvme0" 00:40:25.406 }, 00:40:25.406 "method": "bdev_nvme_attach_controller" 00:40:25.406 }, 00:40:25.406 { 00:40:25.406 "method": "bdev_wait_for_examine" 00:40:25.406 } 00:40:25.406 ] 00:40:25.406 } 00:40:25.406 ] 00:40:25.406 } 00:40:25.406 [2024-12-09 09:52:32.876066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:25.665 [2024-12-09 09:52:32.910936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:25.665 [2024-12-09 09:52:32.941767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:25.665  [2024-12-09T09:52:33.426Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:25.924 00:40:25.924 00:40:25.924 real 0m14.296s 00:40:25.924 user 0m11.092s 00:40:25.924 sys 0m3.928s 00:40:25.924 ************************************ 00:40:25.924 END TEST dd_rw 00:40:25.924 ************************************ 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:25.924 ************************************ 00:40:25.924 START TEST dd_rw_offset 00:40:25.924 ************************************ 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:25.924 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:40:25.925 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=8jpg6acuc28gw07lb1samr10b8lkhnpef85dtywiaas505vwvlc7e7ykxxobv1l4qxqt1gxxnjmd76m7g1kty1gofods4r9tcozvqwvtcghjuxo49lxwvhwshz138hqiay3ssbbgf7o9a4huo40vzoltr5cltxiawr8oyowtd4yp8o98rmdlyh2scqxl28hbl72eju4dm65wkbhe0cj673b8svzy8ue8m3u4cvrrr1fub1jx8ggmmq4pyuiv2axbuj87sci7bmbwu3vnejuuhj5aaypk8hpcyf0olfk6wk131xwz9tf5vn2g6vuvhhigqkobwmmk74s8lls5sg5kzh397ytyn588r51gsnp70uz1abhdtk6u9atu43y0j7jzde31cqc4z4pj50coul8cm68ytp9nyiba15uw5wh47hpxj68y2bsh25g9zuz2vqg4bm4huonx7oj2tiq2e3ed9ow0qo3d5193dz1ohb5ip6rdmjyyonm2fv8nt6xz4w8b2wmf166ayusjyfjmobibdi5bta96i7b3qu1gfh7wuhsvmsp8ilpo31e4hvyc7yr00hmlgupx4zrg4w2yn22535ksi22o8ujqc7gzmwe3x9j1ps5w74sd0k9xuufctldrqsvt3dvgk3btdknj2bgife2ufzilvm7i97qbl11yg53fn0el2jz102txeus1isel6inidyvisqyez5cx9haatftgbzhhpzbed4e1ey43jhhbwnu34icflfr9jpj4cysx9rrvmx9q3ix6fpkm31iey9dbsl1bzdfw8bqwiee4j1nacwvmy8x72h7ufkaz7ckhfpw8mhwyp67us4ulgxzt0igjpli2q1apldjqxg3fre9dc175npik5hq2a4hn8v1k8jqr71yv3oup2gyuf4iugf6l5cdm3fycz348nuxfeoezfrh4rc8qrs4er49aih189qwsxbz954gytysi5uzfdqdo502i9g8ni8bc9taqkbez1ttsp8d8v5e96g69ze0hmn419hlwf3s4odprgmx4qi62cei61p4ujh8r4lyhfccr1n3nt27gjtus9aqsmqsgrm9p1zfi5evnp1gzmaztynkmdewyyj6oouad19eopm5sfudfdrkrknqqoznz3fobs2a6su2tx0db79y664csdynv3cxfxhdg5bansq0xb1fvkpad2ka49gaqyubx0zvz6vmnzkgdes8c69mtfkr1m6o2lyzgll08i9uej66itloqz3xkq3a34750fk8pzqi7i7agt0ehwr6dt79fh55y5qh7ycrugzx9a1nvj8hfbbj4dv3g25s99xmyowjew5rm803mj7cn1jkz09a5rgkf3l0pvmoh8gtfhmmu5qfcmfssorfp1zzq3gxxet5kabdpwv9wyyrt17e4p345n2f39ypdoc63oiusge174wnv18weqc87snaxnnael617621slbqz168t7wryq0i08hmjbw0sqh05hukq87nts71unbbu2qhflpgjmha3x2ftgabqkl8qw41tcamqnqhhxnqtwohdhbl7hbju8u0zb4gtu8bossla9garfx7adiz5m0macmyh8jgx2m0t46fxns1kmzpfetu1hc46nm3su6mncrkz3qmo8mtj0wfx8f3vxma0zaifupwkd8jc7jokxyr7epxc9l4f8z4wd0h86xdbozhrq33shwtoesgy4ms54p0vlnivd57nxiibb05sgpp973dt4ecd48jfjsfj6k8p2rq2av05x2ne8l073dl5rvp5ahdwmg5cvvp4zel4jgc5e8zg9bkrk0lk3y1mk8plqog6mzycb1pcxzlfop5nr74ndhe8pdlu2i4yxik5bll4tenoyethv9bkw7mk2uomny364kspldi7ljt74avbi1vtz30d1fzup80j5st0wcqwmalhbtpgk9w84qpfyl78j9zkezebdc3w7q2u6udz9h1o5shgxze2j48c8vi23gd4foukew7wa6aqm7bl90svsoekncraq6s7lrtm69vmkhdgnqvav43atzwlve7m63qc90rcs1whhf88hda8hoyvra3k5oo9quan3uerpy162so8y15stp5ixt1pqg1gn7t0kdqwsec44um6atpk1yb6d7uh9okpihmxjnri96vd3hxq66ldzutaq1l0m46d1819lphuocaeesxxa1rmf5ws3yv1du98kc3ppbh49vy6bqykpanl7vvxmwimsijwwgjebs86w75ds5kzv3ztsk41w2f918d1qjny00lj575xjdsc02lleh3huz36dgrop9ggjjy80b3npxx5dnwswmcdndfcdbz8vo59ias7at1qrdrkt7ern74g14wsbci1asut5zok9pzpc3aiex1w8l4kdcsvrlezx0x6qkhbgkewifgwjam3he7rc1sr144e1ynvcov2uni6qvx4x1zn4m3d25w9igo1nzm8v8531vvm6yf13molv7i9hnafz10tb6ljfxoisp9wzh4xg3k8bpwxe784mz8e5rf65ghhipj0y2p84m19h023rm55gtop207griltdx059lqau2rez2gvo6zy9yigq6nlaflewrg8v0xrl84g48qn352abzbufevr6il57mhdy9n0o2v5iqpp7zetzg97dbz2jxu506agt742oz517x5jzfcctr70ujv7tliku1oobadfwnorze49uakfm3p6u6wbmu6ensehvsymfy7uod3ggqe3sfwsiekid2py1xix3i1s36v4b2rxan6mtp61i2onn3dlyf1xcow28yfrpd7wrykmefvv9jcyg41ekdb05t2b839it4de2r88zsibysjyizi106kug9o0x0igku6o5kdvkvrof06nyijl9r59me8pgh9vh5h3i4bmat3uauww3jwhteofiminqrfn74ozebzrwqsey2lroibibg67638ku8gvj62uvqsj6m7rwpx5xvnm8abnvrbyxlgunh7wtftgy6ia1pto5j2rg8fuhymk02fwdtl7g1kmoj0iozx3u9gix87doljnpdde2y7dl25m2n09qqtf1cftiv5cw604gacy1bykw36iz195v59ug63fd37h0djf1o8voel7mor0zey06z0nw224bz17qb6sz8wyafizrvhutyaz8nd8st2k041d5idbf7dvbptlv36yaqf7q0xtwvkvq5uc67pl5cnfdwtgh2jfh3sm3ss05yi5hf782m9fin4738ppbzbd0ed1482kr5hoj6pt1nooqoj8g3p79xnw027bcsyzwuy2xwztsubuehdh2ctl1z8py8mamqzjgz2iffjo7thqoyi2aycm0s4jbciojb96wiwnei4l8lsghj72z9z9dhvmp996qq07ii5omfsyphlpunjdbjvo4zw5jay2hnj5rknihuter4hgdun4uwc4i8twu1f3ikl5nhkzaftvprmnfwkow9eur7hqi1q3gqcyeqzwc4auoflaao50oydo4esmwartdyumtomtb860we
zdoz4dgioso2k87hk1ix10ku9ibjj1sti2a2o3p6wvx1sf000vw67nob0v4k0rq7rh3wvlopoxdwb7350p3j1hh8ndpzwukptdsneevlqarbjuene6x80kg7ow1y84clcg4z70gqx6aa4ellftz6o7sm22r5bevrmhdweablb21dhy7t9olhhmq2jl1rb4aj09ict47jhv2u29lbap7pdr7mvuzrrmnzs39a18ar4gxj7s547gf49hz7m2peroa8aoikuhutwsbz51b9rgji1wejg92ibaqynmyefx2ybm18k42dprr4iyyg5wh644jrmdaqi5yowcwaxp199dvt7swmem2fjwdr96kb373l9u7x2v0ifrcsw7kwiz7ecsyrwrimu552edohr6j4jmpasr0q98cozpzs7t8aahvr8nqglcki9gx62qx9tap22tt1as8r3x6az7ainoyiiyflv7t5it35ljtv7dpvlpk272ox7p783k8tsxb8ibqwqfi80z3hdmascj3rodhigcvpbhp3ukts7ehh0g 00:40:25.925 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:40:25.925 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:40:25.925 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:40:25.925 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:25.925 [2024-12-09 09:52:33.408195] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:25.925 [2024-12-09 09:52:33.408305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59944 ] 00:40:25.925 { 00:40:25.925 "subsystems": [ 00:40:25.925 { 00:40:25.925 "subsystem": "bdev", 00:40:25.925 "config": [ 00:40:25.925 { 00:40:25.925 "params": { 00:40:25.925 "trtype": "pcie", 00:40:25.925 "traddr": "0000:00:10.0", 00:40:25.925 "name": "Nvme0" 00:40:25.925 }, 00:40:25.925 "method": "bdev_nvme_attach_controller" 00:40:25.925 }, 00:40:25.925 { 00:40:25.925 "method": "bdev_wait_for_examine" 00:40:25.925 } 00:40:25.925 ] 00:40:25.925 } 00:40:25.925 ] 00:40:25.925 } 00:40:26.183 [2024-12-09 09:52:33.564975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:26.183 [2024-12-09 09:52:33.604376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.183 [2024-12-09 09:52:33.638290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:26.441  [2024-12-09T09:52:33.943Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:40:26.441 00:40:26.441 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:40:26.441 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:40:26.441 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:40:26.441 09:52:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:26.699 [2024-12-09 09:52:33.983821] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:26.699 [2024-12-09 09:52:33.984178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59952 ] 00:40:26.699 { 00:40:26.699 "subsystems": [ 00:40:26.699 { 00:40:26.699 "subsystem": "bdev", 00:40:26.699 "config": [ 00:40:26.699 { 00:40:26.699 "params": { 00:40:26.699 "trtype": "pcie", 00:40:26.699 "traddr": "0000:00:10.0", 00:40:26.699 "name": "Nvme0" 00:40:26.699 }, 00:40:26.699 "method": "bdev_nvme_attach_controller" 00:40:26.699 }, 00:40:26.699 { 00:40:26.699 "method": "bdev_wait_for_examine" 00:40:26.699 } 00:40:26.699 ] 00:40:26.699 } 00:40:26.699 ] 00:40:26.699 } 00:40:26.699 [2024-12-09 09:52:34.138477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:26.699 [2024-12-09 09:52:34.179353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.958 [2024-12-09 09:52:34.214768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:26.958  [2024-12-09T09:52:34.719Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:40:27.217 00:40:27.217 09:52:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:40:27.217 ************************************ 00:40:27.217 END TEST dd_rw_offset 00:40:27.217 ************************************ 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 8jpg6acuc28gw07lb1samr10b8lkhnpef85dtywiaas505vwvlc7e7ykxxobv1l4qxqt1gxxnjmd76m7g1kty1gofods4r9tcozvqwvtcghjuxo49lxwvhwshz138hqiay3ssbbgf7o9a4huo40vzoltr5cltxiawr8oyowtd4yp8o98rmdlyh2scqxl28hbl72eju4dm65wkbhe0cj673b8svzy8ue8m3u4cvrrr1fub1jx8ggmmq4pyuiv2axbuj87sci7bmbwu3vnejuuhj5aaypk8hpcyf0olfk6wk131xwz9tf5vn2g6vuvhhigqkobwmmk74s8lls5sg5kzh397ytyn588r51gsnp70uz1abhdtk6u9atu43y0j7jzde31cqc4z4pj50coul8cm68ytp9nyiba15uw5wh47hpxj68y2bsh25g9zuz2vqg4bm4huonx7oj2tiq2e3ed9ow0qo3d5193dz1ohb5ip6rdmjyyonm2fv8nt6xz4w8b2wmf166ayusjyfjmobibdi5bta96i7b3qu1gfh7wuhsvmsp8ilpo31e4hvyc7yr00hmlgupx4zrg4w2yn22535ksi22o8ujqc7gzmwe3x9j1ps5w74sd0k9xuufctldrqsvt3dvgk3btdknj2bgife2ufzilvm7i97qbl11yg53fn0el2jz102txeus1isel6inidyvisqyez5cx9haatftgbzhhpzbed4e1ey43jhhbwnu34icflfr9jpj4cysx9rrvmx9q3ix6fpkm31iey9dbsl1bzdfw8bqwiee4j1nacwvmy8x72h7ufkaz7ckhfpw8mhwyp67us4ulgxzt0igjpli2q1apldjqxg3fre9dc175npik5hq2a4hn8v1k8jqr71yv3oup2gyuf4iugf6l5cdm3fycz348nuxfeoezfrh4rc8qrs4er49aih189qwsxbz954gytysi5uzfdqdo502i9g8ni8bc9taqkbez1ttsp8d8v5e96g69ze0hmn419hlwf3s4odprgmx4qi62cei61p4ujh8r4lyhfccr1n3nt27gjtus9aqsmqsgrm9p1zfi5evnp1gzmaztynkmdewyyj6oouad19eopm5sfudfdrkrknqqoznz3fobs2a6su2tx0db79y664csdynv3cxfxhdg5bansq0xb1fvkpad2ka49gaqyubx0zvz6vmnzkgdes8c69mtfkr1m6o2lyzgll08i9uej66itloqz3xkq3a34750fk8pzqi7i7agt0ehwr6dt79fh55y5qh7ycrugzx9a1nvj8hfbbj4dv3g25s99xmyowjew5rm803mj7cn1jkz09a5rgkf3l0pvmoh8gtfhmmu5qfcmfssorfp1zzq3gxxet5kabdpwv9wyyrt17e4p345n2f39ypdoc63oiusge174wnv18weqc87snaxnnael617621slbqz168t7wryq0i08hmjbw0sqh05hukq87nts71unbbu2qhflpgjmha3x2ftgabqkl8qw41tcamqnqhhxnqtwohdhbl7hbju8u0zb4gtu8bossla9garfx7adiz5m0macmyh8jgx2m0t46fxns1kmzpfetu1hc46nm3su6mncrkz3qmo8mtj0wfx8f3vxma0zaifupwkd8jc7jokxyr7epxc9l4f8z4wd0h86xdbozhrq33shwtoesgy4ms54p0vlnivd57nxiibb05sgpp973dt4ecd48jfjsfj6k8p2rq2av05x2ne8l073dl5rvp5ahdwmg5cvvp4zel4jgc5e8zg9bkrk0lk3y1mk8plqog6mzycb1pcxzlfop5nr74ndhe8pdlu2i4yxik5bll4tenoyethv9bkw7mk2uomny364kspldi7ljt74avbi1vtz30d1fzup80j5st0wcqwmalhbtpgk9w84qpfyl78j9zkezebdc3w7q2u6udz9h1o5shgxze2j4
8c8vi23gd4foukew7wa6aqm7bl90svsoekncraq6s7lrtm69vmkhdgnqvav43atzwlve7m63qc90rcs1whhf88hda8hoyvra3k5oo9quan3uerpy162so8y15stp5ixt1pqg1gn7t0kdqwsec44um6atpk1yb6d7uh9okpihmxjnri96vd3hxq66ldzutaq1l0m46d1819lphuocaeesxxa1rmf5ws3yv1du98kc3ppbh49vy6bqykpanl7vvxmwimsijwwgjebs86w75ds5kzv3ztsk41w2f918d1qjny00lj575xjdsc02lleh3huz36dgrop9ggjjy80b3npxx5dnwswmcdndfcdbz8vo59ias7at1qrdrkt7ern74g14wsbci1asut5zok9pzpc3aiex1w8l4kdcsvrlezx0x6qkhbgkewifgwjam3he7rc1sr144e1ynvcov2uni6qvx4x1zn4m3d25w9igo1nzm8v8531vvm6yf13molv7i9hnafz10tb6ljfxoisp9wzh4xg3k8bpwxe784mz8e5rf65ghhipj0y2p84m19h023rm55gtop207griltdx059lqau2rez2gvo6zy9yigq6nlaflewrg8v0xrl84g48qn352abzbufevr6il57mhdy9n0o2v5iqpp7zetzg97dbz2jxu506agt742oz517x5jzfcctr70ujv7tliku1oobadfwnorze49uakfm3p6u6wbmu6ensehvsymfy7uod3ggqe3sfwsiekid2py1xix3i1s36v4b2rxan6mtp61i2onn3dlyf1xcow28yfrpd7wrykmefvv9jcyg41ekdb05t2b839it4de2r88zsibysjyizi106kug9o0x0igku6o5kdvkvrof06nyijl9r59me8pgh9vh5h3i4bmat3uauww3jwhteofiminqrfn74ozebzrwqsey2lroibibg67638ku8gvj62uvqsj6m7rwpx5xvnm8abnvrbyxlgunh7wtftgy6ia1pto5j2rg8fuhymk02fwdtl7g1kmoj0iozx3u9gix87doljnpdde2y7dl25m2n09qqtf1cftiv5cw604gacy1bykw36iz195v59ug63fd37h0djf1o8voel7mor0zey06z0nw224bz17qb6sz8wyafizrvhutyaz8nd8st2k041d5idbf7dvbptlv36yaqf7q0xtwvkvq5uc67pl5cnfdwtgh2jfh3sm3ss05yi5hf782m9fin4738ppbzbd0ed1482kr5hoj6pt1nooqoj8g3p79xnw027bcsyzwuy2xwztsubuehdh2ctl1z8py8mamqzjgz2iffjo7thqoyi2aycm0s4jbciojb96wiwnei4l8lsghj72z9z9dhvmp996qq07ii5omfsyphlpunjdbjvo4zw5jay2hnj5rknihuter4hgdun4uwc4i8twu1f3ikl5nhkzaftvprmnfwkow9eur7hqi1q3gqcyeqzwc4auoflaao50oydo4esmwartdyumtomtb860wezdoz4dgioso2k87hk1ix10ku9ibjj1sti2a2o3p6wvx1sf000vw67nob0v4k0rq7rh3wvlopoxdwb7350p3j1hh8ndpzwukptdsneevlqarbjuene6x80kg7ow1y84clcg4z70gqx6aa4ellftz6o7sm22r5bevrmhdweablb21dhy7t9olhhmq2jl1rb4aj09ict47jhv2u29lbap7pdr7mvuzrrmnzs39a18ar4gxj7s547gf49hz7m2peroa8aoikuhutwsbz51b9rgji1wejg92ibaqynmyefx2ybm18k42dprr4iyyg5wh644jrmdaqi5yowcwaxp199dvt7swmem2fjwdr96kb373l9u7x2v0ifrcsw7kwiz7ecsyrwrimu552edohr6j4jmpasr0q98cozpzs7t8aahvr8nqglcki9gx62qx9tap22tt1as8r3x6az7ainoyiiyflv7t5it35ljtv7dpvlpk272ox7p783k8tsxb8ibqwqfi80z3hdmascj3rodhigcvpbhp3ukts7ehh0g == 
\8\j\p\g\6\a\c\u\c\2\8\g\w\0\7\l\b\1\s\a\m\r\1\0\b\8\l\k\h\n\p\e\f\8\5\d\t\y\w\i\a\a\s\5\0\5\v\w\v\l\c\7\e\7\y\k\x\x\o\b\v\1\l\4\q\x\q\t\1\g\x\x\n\j\m\d\7\6\m\7\g\1\k\t\y\1\g\o\f\o\d\s\4\r\9\t\c\o\z\v\q\w\v\t\c\g\h\j\u\x\o\4\9\l\x\w\v\h\w\s\h\z\1\3\8\h\q\i\a\y\3\s\s\b\b\g\f\7\o\9\a\4\h\u\o\4\0\v\z\o\l\t\r\5\c\l\t\x\i\a\w\r\8\o\y\o\w\t\d\4\y\p\8\o\9\8\r\m\d\l\y\h\2\s\c\q\x\l\2\8\h\b\l\7\2\e\j\u\4\d\m\6\5\w\k\b\h\e\0\c\j\6\7\3\b\8\s\v\z\y\8\u\e\8\m\3\u\4\c\v\r\r\r\1\f\u\b\1\j\x\8\g\g\m\m\q\4\p\y\u\i\v\2\a\x\b\u\j\8\7\s\c\i\7\b\m\b\w\u\3\v\n\e\j\u\u\h\j\5\a\a\y\p\k\8\h\p\c\y\f\0\o\l\f\k\6\w\k\1\3\1\x\w\z\9\t\f\5\v\n\2\g\6\v\u\v\h\h\i\g\q\k\o\b\w\m\m\k\7\4\s\8\l\l\s\5\s\g\5\k\z\h\3\9\7\y\t\y\n\5\8\8\r\5\1\g\s\n\p\7\0\u\z\1\a\b\h\d\t\k\6\u\9\a\t\u\4\3\y\0\j\7\j\z\d\e\3\1\c\q\c\4\z\4\p\j\5\0\c\o\u\l\8\c\m\6\8\y\t\p\9\n\y\i\b\a\1\5\u\w\5\w\h\4\7\h\p\x\j\6\8\y\2\b\s\h\2\5\g\9\z\u\z\2\v\q\g\4\b\m\4\h\u\o\n\x\7\o\j\2\t\i\q\2\e\3\e\d\9\o\w\0\q\o\3\d\5\1\9\3\d\z\1\o\h\b\5\i\p\6\r\d\m\j\y\y\o\n\m\2\f\v\8\n\t\6\x\z\4\w\8\b\2\w\m\f\1\6\6\a\y\u\s\j\y\f\j\m\o\b\i\b\d\i\5\b\t\a\9\6\i\7\b\3\q\u\1\g\f\h\7\w\u\h\s\v\m\s\p\8\i\l\p\o\3\1\e\4\h\v\y\c\7\y\r\0\0\h\m\l\g\u\p\x\4\z\r\g\4\w\2\y\n\2\2\5\3\5\k\s\i\2\2\o\8\u\j\q\c\7\g\z\m\w\e\3\x\9\j\1\p\s\5\w\7\4\s\d\0\k\9\x\u\u\f\c\t\l\d\r\q\s\v\t\3\d\v\g\k\3\b\t\d\k\n\j\2\b\g\i\f\e\2\u\f\z\i\l\v\m\7\i\9\7\q\b\l\1\1\y\g\5\3\f\n\0\e\l\2\j\z\1\0\2\t\x\e\u\s\1\i\s\e\l\6\i\n\i\d\y\v\i\s\q\y\e\z\5\c\x\9\h\a\a\t\f\t\g\b\z\h\h\p\z\b\e\d\4\e\1\e\y\4\3\j\h\h\b\w\n\u\3\4\i\c\f\l\f\r\9\j\p\j\4\c\y\s\x\9\r\r\v\m\x\9\q\3\i\x\6\f\p\k\m\3\1\i\e\y\9\d\b\s\l\1\b\z\d\f\w\8\b\q\w\i\e\e\4\j\1\n\a\c\w\v\m\y\8\x\7\2\h\7\u\f\k\a\z\7\c\k\h\f\p\w\8\m\h\w\y\p\6\7\u\s\4\u\l\g\x\z\t\0\i\g\j\p\l\i\2\q\1\a\p\l\d\j\q\x\g\3\f\r\e\9\d\c\1\7\5\n\p\i\k\5\h\q\2\a\4\h\n\8\v\1\k\8\j\q\r\7\1\y\v\3\o\u\p\2\g\y\u\f\4\i\u\g\f\6\l\5\c\d\m\3\f\y\c\z\3\4\8\n\u\x\f\e\o\e\z\f\r\h\4\r\c\8\q\r\s\4\e\r\4\9\a\i\h\1\8\9\q\w\s\x\b\z\9\5\4\g\y\t\y\s\i\5\u\z\f\d\q\d\o\5\0\2\i\9\g\8\n\i\8\b\c\9\t\a\q\k\b\e\z\1\t\t\s\p\8\d\8\v\5\e\9\6\g\6\9\z\e\0\h\m\n\4\1\9\h\l\w\f\3\s\4\o\d\p\r\g\m\x\4\q\i\6\2\c\e\i\6\1\p\4\u\j\h\8\r\4\l\y\h\f\c\c\r\1\n\3\n\t\2\7\g\j\t\u\s\9\a\q\s\m\q\s\g\r\m\9\p\1\z\f\i\5\e\v\n\p\1\g\z\m\a\z\t\y\n\k\m\d\e\w\y\y\j\6\o\o\u\a\d\1\9\e\o\p\m\5\s\f\u\d\f\d\r\k\r\k\n\q\q\o\z\n\z\3\f\o\b\s\2\a\6\s\u\2\t\x\0\d\b\7\9\y\6\6\4\c\s\d\y\n\v\3\c\x\f\x\h\d\g\5\b\a\n\s\q\0\x\b\1\f\v\k\p\a\d\2\k\a\4\9\g\a\q\y\u\b\x\0\z\v\z\6\v\m\n\z\k\g\d\e\s\8\c\6\9\m\t\f\k\r\1\m\6\o\2\l\y\z\g\l\l\0\8\i\9\u\e\j\6\6\i\t\l\o\q\z\3\x\k\q\3\a\3\4\7\5\0\f\k\8\p\z\q\i\7\i\7\a\g\t\0\e\h\w\r\6\d\t\7\9\f\h\5\5\y\5\q\h\7\y\c\r\u\g\z\x\9\a\1\n\v\j\8\h\f\b\b\j\4\d\v\3\g\2\5\s\9\9\x\m\y\o\w\j\e\w\5\r\m\8\0\3\m\j\7\c\n\1\j\k\z\0\9\a\5\r\g\k\f\3\l\0\p\v\m\o\h\8\g\t\f\h\m\m\u\5\q\f\c\m\f\s\s\o\r\f\p\1\z\z\q\3\g\x\x\e\t\5\k\a\b\d\p\w\v\9\w\y\y\r\t\1\7\e\4\p\3\4\5\n\2\f\3\9\y\p\d\o\c\6\3\o\i\u\s\g\e\1\7\4\w\n\v\1\8\w\e\q\c\8\7\s\n\a\x\n\n\a\e\l\6\1\7\6\2\1\s\l\b\q\z\1\6\8\t\7\w\r\y\q\0\i\0\8\h\m\j\b\w\0\s\q\h\0\5\h\u\k\q\8\7\n\t\s\7\1\u\n\b\b\u\2\q\h\f\l\p\g\j\m\h\a\3\x\2\f\t\g\a\b\q\k\l\8\q\w\4\1\t\c\a\m\q\n\q\h\h\x\n\q\t\w\o\h\d\h\b\l\7\h\b\j\u\8\u\0\z\b\4\g\t\u\8\b\o\s\s\l\a\9\g\a\r\f\x\7\a\d\i\z\5\m\0\m\a\c\m\y\h\8\j\g\x\2\m\0\t\4\6\f\x\n\s\1\k\m\z\p\f\e\t\u\1\h\c\4\6\n\m\3\s\u\6\m\n\c\r\k\z\3\q\m\o\8\m\t\j\0\w\f\x\8\f\3\v\x\m\a\0\z\a\i\f\u\p\w\k\d\8\j\c\7\j\o\k\x\y\r\7\e\p\x\c\9\l\4\f\8\z\4\w\d\0\h\8\6\x\d\b\o\z\h\r\q\3\3\s\h\w\t\o\e\s\g\y\4\m\s\5\4\p\0\v\l\n\i\v\d\5\7\n\x\i\i\b\b\0\5\s\g\p\p\9\7\3\d\t\4\e\c\d\4\8\j\f\j\s\f\j\6\k\8\p\2\r\q\2\a\v\0\5\x\
2\n\e\8\l\0\7\3\d\l\5\r\v\p\5\a\h\d\w\m\g\5\c\v\v\p\4\z\e\l\4\j\g\c\5\e\8\z\g\9\b\k\r\k\0\l\k\3\y\1\m\k\8\p\l\q\o\g\6\m\z\y\c\b\1\p\c\x\z\l\f\o\p\5\n\r\7\4\n\d\h\e\8\p\d\l\u\2\i\4\y\x\i\k\5\b\l\l\4\t\e\n\o\y\e\t\h\v\9\b\k\w\7\m\k\2\u\o\m\n\y\3\6\4\k\s\p\l\d\i\7\l\j\t\7\4\a\v\b\i\1\v\t\z\3\0\d\1\f\z\u\p\8\0\j\5\s\t\0\w\c\q\w\m\a\l\h\b\t\p\g\k\9\w\8\4\q\p\f\y\l\7\8\j\9\z\k\e\z\e\b\d\c\3\w\7\q\2\u\6\u\d\z\9\h\1\o\5\s\h\g\x\z\e\2\j\4\8\c\8\v\i\2\3\g\d\4\f\o\u\k\e\w\7\w\a\6\a\q\m\7\b\l\9\0\s\v\s\o\e\k\n\c\r\a\q\6\s\7\l\r\t\m\6\9\v\m\k\h\d\g\n\q\v\a\v\4\3\a\t\z\w\l\v\e\7\m\6\3\q\c\9\0\r\c\s\1\w\h\h\f\8\8\h\d\a\8\h\o\y\v\r\a\3\k\5\o\o\9\q\u\a\n\3\u\e\r\p\y\1\6\2\s\o\8\y\1\5\s\t\p\5\i\x\t\1\p\q\g\1\g\n\7\t\0\k\d\q\w\s\e\c\4\4\u\m\6\a\t\p\k\1\y\b\6\d\7\u\h\9\o\k\p\i\h\m\x\j\n\r\i\9\6\v\d\3\h\x\q\6\6\l\d\z\u\t\a\q\1\l\0\m\4\6\d\1\8\1\9\l\p\h\u\o\c\a\e\e\s\x\x\a\1\r\m\f\5\w\s\3\y\v\1\d\u\9\8\k\c\3\p\p\b\h\4\9\v\y\6\b\q\y\k\p\a\n\l\7\v\v\x\m\w\i\m\s\i\j\w\w\g\j\e\b\s\8\6\w\7\5\d\s\5\k\z\v\3\z\t\s\k\4\1\w\2\f\9\1\8\d\1\q\j\n\y\0\0\l\j\5\7\5\x\j\d\s\c\0\2\l\l\e\h\3\h\u\z\3\6\d\g\r\o\p\9\g\g\j\j\y\8\0\b\3\n\p\x\x\5\d\n\w\s\w\m\c\d\n\d\f\c\d\b\z\8\v\o\5\9\i\a\s\7\a\t\1\q\r\d\r\k\t\7\e\r\n\7\4\g\1\4\w\s\b\c\i\1\a\s\u\t\5\z\o\k\9\p\z\p\c\3\a\i\e\x\1\w\8\l\4\k\d\c\s\v\r\l\e\z\x\0\x\6\q\k\h\b\g\k\e\w\i\f\g\w\j\a\m\3\h\e\7\r\c\1\s\r\1\4\4\e\1\y\n\v\c\o\v\2\u\n\i\6\q\v\x\4\x\1\z\n\4\m\3\d\2\5\w\9\i\g\o\1\n\z\m\8\v\8\5\3\1\v\v\m\6\y\f\1\3\m\o\l\v\7\i\9\h\n\a\f\z\1\0\t\b\6\l\j\f\x\o\i\s\p\9\w\z\h\4\x\g\3\k\8\b\p\w\x\e\7\8\4\m\z\8\e\5\r\f\6\5\g\h\h\i\p\j\0\y\2\p\8\4\m\1\9\h\0\2\3\r\m\5\5\g\t\o\p\2\0\7\g\r\i\l\t\d\x\0\5\9\l\q\a\u\2\r\e\z\2\g\v\o\6\z\y\9\y\i\g\q\6\n\l\a\f\l\e\w\r\g\8\v\0\x\r\l\8\4\g\4\8\q\n\3\5\2\a\b\z\b\u\f\e\v\r\6\i\l\5\7\m\h\d\y\9\n\0\o\2\v\5\i\q\p\p\7\z\e\t\z\g\9\7\d\b\z\2\j\x\u\5\0\6\a\g\t\7\4\2\o\z\5\1\7\x\5\j\z\f\c\c\t\r\7\0\u\j\v\7\t\l\i\k\u\1\o\o\b\a\d\f\w\n\o\r\z\e\4\9\u\a\k\f\m\3\p\6\u\6\w\b\m\u\6\e\n\s\e\h\v\s\y\m\f\y\7\u\o\d\3\g\g\q\e\3\s\f\w\s\i\e\k\i\d\2\p\y\1\x\i\x\3\i\1\s\3\6\v\4\b\2\r\x\a\n\6\m\t\p\6\1\i\2\o\n\n\3\d\l\y\f\1\x\c\o\w\2\8\y\f\r\p\d\7\w\r\y\k\m\e\f\v\v\9\j\c\y\g\4\1\e\k\d\b\0\5\t\2\b\8\3\9\i\t\4\d\e\2\r\8\8\z\s\i\b\y\s\j\y\i\z\i\1\0\6\k\u\g\9\o\0\x\0\i\g\k\u\6\o\5\k\d\v\k\v\r\o\f\0\6\n\y\i\j\l\9\r\5\9\m\e\8\p\g\h\9\v\h\5\h\3\i\4\b\m\a\t\3\u\a\u\w\w\3\j\w\h\t\e\o\f\i\m\i\n\q\r\f\n\7\4\o\z\e\b\z\r\w\q\s\e\y\2\l\r\o\i\b\i\b\g\6\7\6\3\8\k\u\8\g\v\j\6\2\u\v\q\s\j\6\m\7\r\w\p\x\5\x\v\n\m\8\a\b\n\v\r\b\y\x\l\g\u\n\h\7\w\t\f\t\g\y\6\i\a\1\p\t\o\5\j\2\r\g\8\f\u\h\y\m\k\0\2\f\w\d\t\l\7\g\1\k\m\o\j\0\i\o\z\x\3\u\9\g\i\x\8\7\d\o\l\j\n\p\d\d\e\2\y\7\d\l\2\5\m\2\n\0\9\q\q\t\f\1\c\f\t\i\v\5\c\w\6\0\4\g\a\c\y\1\b\y\k\w\3\6\i\z\1\9\5\v\5\9\u\g\6\3\f\d\3\7\h\0\d\j\f\1\o\8\v\o\e\l\7\m\o\r\0\z\e\y\0\6\z\0\n\w\2\2\4\b\z\1\7\q\b\6\s\z\8\w\y\a\f\i\z\r\v\h\u\t\y\a\z\8\n\d\8\s\t\2\k\0\4\1\d\5\i\d\b\f\7\d\v\b\p\t\l\v\3\6\y\a\q\f\7\q\0\x\t\w\v\k\v\q\5\u\c\6\7\p\l\5\c\n\f\d\w\t\g\h\2\j\f\h\3\s\m\3\s\s\0\5\y\i\5\h\f\7\8\2\m\9\f\i\n\4\7\3\8\p\p\b\z\b\d\0\e\d\1\4\8\2\k\r\5\h\o\j\6\p\t\1\n\o\o\q\o\j\8\g\3\p\7\9\x\n\w\0\2\7\b\c\s\y\z\w\u\y\2\x\w\z\t\s\u\b\u\e\h\d\h\2\c\t\l\1\z\8\p\y\8\m\a\m\q\z\j\g\z\2\i\f\f\j\o\7\t\h\q\o\y\i\2\a\y\c\m\0\s\4\j\b\c\i\o\j\b\9\6\w\i\w\n\e\i\4\l\8\l\s\g\h\j\7\2\z\9\z\9\d\h\v\m\p\9\9\6\q\q\0\7\i\i\5\o\m\f\s\y\p\h\l\p\u\n\j\d\b\j\v\o\4\z\w\5\j\a\y\2\h\n\j\5\r\k\n\i\h\u\t\e\r\4\h\g\d\u\n\4\u\w\c\4\i\8\t\w\u\1\f\3\i\k\l\5\n\h\k\z\a\f\t\v\p\r\m\n\f\w\k\o\w\9\e\u\r\7\h\q\i\1\q\3\g\q\c\y\e\q\z\w\c\4\a\u\o\f\l\a\a\o\5\0\o\y\d\o\4\e\s\m\w\a\r\t\d\y\u\m\t\o\m\t\b\8\6\0\w\e\z\d\o\z\4
\d\g\i\o\s\o\2\k\8\7\h\k\1\i\x\1\0\k\u\9\i\b\j\j\1\s\t\i\2\a\2\o\3\p\6\w\v\x\1\s\f\0\0\0\v\w\6\7\n\o\b\0\v\4\k\0\r\q\7\r\h\3\w\v\l\o\p\o\x\d\w\b\7\3\5\0\p\3\j\1\h\h\8\n\d\p\z\w\u\k\p\t\d\s\n\e\e\v\l\q\a\r\b\j\u\e\n\e\6\x\8\0\k\g\7\o\w\1\y\8\4\c\l\c\g\4\z\7\0\g\q\x\6\a\a\4\e\l\l\f\t\z\6\o\7\s\m\2\2\r\5\b\e\v\r\m\h\d\w\e\a\b\l\b\2\1\d\h\y\7\t\9\o\l\h\h\m\q\2\j\l\1\r\b\4\a\j\0\9\i\c\t\4\7\j\h\v\2\u\2\9\l\b\a\p\7\p\d\r\7\m\v\u\z\r\r\m\n\z\s\3\9\a\1\8\a\r\4\g\x\j\7\s\5\4\7\g\f\4\9\h\z\7\m\2\p\e\r\o\a\8\a\o\i\k\u\h\u\t\w\s\b\z\5\1\b\9\r\g\j\i\1\w\e\j\g\9\2\i\b\a\q\y\n\m\y\e\f\x\2\y\b\m\1\8\k\4\2\d\p\r\r\4\i\y\y\g\5\w\h\6\4\4\j\r\m\d\a\q\i\5\y\o\w\c\w\a\x\p\1\9\9\d\v\t\7\s\w\m\e\m\2\f\j\w\d\r\9\6\k\b\3\7\3\l\9\u\7\x\2\v\0\i\f\r\c\s\w\7\k\w\i\z\7\e\c\s\y\r\w\r\i\m\u\5\5\2\e\d\o\h\r\6\j\4\j\m\p\a\s\r\0\q\9\8\c\o\z\p\z\s\7\t\8\a\a\h\v\r\8\n\q\g\l\c\k\i\9\g\x\6\2\q\x\9\t\a\p\2\2\t\t\1\a\s\8\r\3\x\6\a\z\7\a\i\n\o\y\i\i\y\f\l\v\7\t\5\i\t\3\5\l\j\t\v\7\d\p\v\l\p\k\2\7\2\o\x\7\p\7\8\3\k\8\t\s\x\b\8\i\b\q\w\q\f\i\8\0\z\3\h\d\m\a\s\c\j\3\r\o\d\h\i\g\c\v\p\b\h\p\3\u\k\t\s\7\e\h\h\0\g ]] 00:40:27.218 00:40:27.218 real 0m1.235s 00:40:27.218 user 0m0.893s 00:40:27.218 sys 0m0.439s 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:27.218 09:52:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:27.218 [2024-12-09 09:52:34.632996] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:27.218 [2024-12-09 09:52:34.633089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59987 ] 00:40:27.218 { 00:40:27.218 "subsystems": [ 00:40:27.218 { 00:40:27.218 "subsystem": "bdev", 00:40:27.218 "config": [ 00:40:27.218 { 00:40:27.218 "params": { 00:40:27.218 "trtype": "pcie", 00:40:27.218 "traddr": "0000:00:10.0", 00:40:27.218 "name": "Nvme0" 00:40:27.218 }, 00:40:27.218 "method": "bdev_nvme_attach_controller" 00:40:27.218 }, 00:40:27.218 { 00:40:27.218 "method": "bdev_wait_for_examine" 00:40:27.218 } 00:40:27.218 ] 00:40:27.218 } 00:40:27.218 ] 00:40:27.218 } 00:40:27.477 [2024-12-09 09:52:34.787912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.477 [2024-12-09 09:52:34.821268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.477 [2024-12-09 09:52:34.852983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:27.477  [2024-12-09T09:52:35.241Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:27.739 00:40:27.739 09:52:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:27.739 ************************************ 00:40:27.739 END TEST spdk_dd_basic_rw 00:40:27.739 ************************************ 00:40:27.739 00:40:27.739 real 0m17.303s 00:40:27.739 user 0m13.129s 00:40:27.739 sys 0m4.913s 00:40:27.739 09:52:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.739 09:52:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:27.739 09:52:35 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:40:27.739 09:52:35 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:27.739 09:52:35 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:27.739 09:52:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:40:27.739 ************************************ 00:40:27.739 START TEST spdk_dd_posix 00:40:27.739 ************************************ 00:40:27.739 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:40:27.999 * Looking for test storage... 
00:40:27.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:27.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.999 --rc genhtml_branch_coverage=1 00:40:27.999 --rc genhtml_function_coverage=1 00:40:27.999 --rc genhtml_legend=1 00:40:27.999 --rc geninfo_all_blocks=1 00:40:27.999 --rc geninfo_unexecuted_blocks=1 00:40:27.999 00:40:27.999 ' 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:27.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.999 --rc genhtml_branch_coverage=1 00:40:27.999 --rc genhtml_function_coverage=1 00:40:27.999 --rc genhtml_legend=1 00:40:27.999 --rc geninfo_all_blocks=1 00:40:27.999 --rc geninfo_unexecuted_blocks=1 00:40:27.999 00:40:27.999 ' 00:40:27.999 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:27.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.999 --rc genhtml_branch_coverage=1 00:40:27.999 --rc genhtml_function_coverage=1 00:40:27.999 --rc genhtml_legend=1 00:40:28.000 --rc geninfo_all_blocks=1 00:40:28.000 --rc geninfo_unexecuted_blocks=1 00:40:28.000 00:40:28.000 ' 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:28.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.000 --rc genhtml_branch_coverage=1 00:40:28.000 --rc genhtml_function_coverage=1 00:40:28.000 --rc genhtml_legend=1 00:40:28.000 --rc geninfo_all_blocks=1 00:40:28.000 --rc geninfo_unexecuted_blocks=1 00:40:28.000 00:40:28.000 ' 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:40:28.000 * First test run, liburing in use 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:28.000 ************************************ 00:40:28.000 START TEST dd_flag_append 00:40:28.000 ************************************ 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=ckj1ozbf8emyn3lxyidoog7ym5wm9q9b 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=bsfzcg37xsqtoq5b67jtondm0wlcmssy 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s ckj1ozbf8emyn3lxyidoog7ym5wm9q9b 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s bsfzcg37xsqtoq5b67jtondm0wlcmssy 00:40:28.000 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:40:28.258 [2024-12-09 09:52:35.521337] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:28.258 [2024-12-09 09:52:35.521682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60059 ] 00:40:28.258 [2024-12-09 09:52:35.675328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.258 [2024-12-09 09:52:35.710631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.258 [2024-12-09 09:52:35.741518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:28.517  [2024-12-09T09:52:36.019Z] Copying: 32/32 [B] (average 31 kBps) 00:40:28.517 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ bsfzcg37xsqtoq5b67jtondm0wlcmssyckj1ozbf8emyn3lxyidoog7ym5wm9q9b == \b\s\f\z\c\g\3\7\x\s\q\t\o\q\5\b\6\7\j\t\o\n\d\m\0\w\l\c\m\s\s\y\c\k\j\1\o\z\b\f\8\e\m\y\n\3\l\x\y\i\d\o\o\g\7\y\m\5\w\m\9\q\9\b ]] 00:40:28.517 00:40:28.517 real 0m0.494s 00:40:28.517 user 0m0.292s 00:40:28.517 sys 0m0.172s 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:40:28.517 ************************************ 00:40:28.517 END TEST dd_flag_append 00:40:28.517 ************************************ 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:28.517 ************************************ 00:40:28.517 START TEST dd_flag_directory 00:40:28.517 ************************************ 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:28.517 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:28.517 09:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:28.517 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:28.517 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:28.517 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:28.775 [2024-12-09 09:52:36.061776] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:28.775 [2024-12-09 09:52:36.061885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60082 ] 00:40:28.775 [2024-12-09 09:52:36.211840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.775 [2024-12-09 09:52:36.256875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:29.034 [2024-12-09 09:52:36.288246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:29.034 [2024-12-09 09:52:36.307402] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:29.034 [2024-12-09 09:52:36.307648] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:29.034 [2024-12-09 09:52:36.307667] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:29.034 [2024-12-09 09:52:36.370815] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:29.034 09:52:36 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:29.034 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:29.034 [2024-12-09 09:52:36.534070] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:29.293 [2024-12-09 09:52:36.534368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60097 ] 00:40:29.293 [2024-12-09 09:52:36.682567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:29.293 [2024-12-09 09:52:36.716114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:29.293 [2024-12-09 09:52:36.745957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:29.293 [2024-12-09 09:52:36.765519] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:29.293 [2024-12-09 09:52:36.765576] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:29.293 [2024-12-09 09:52:36.765591] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:29.551 [2024-12-09 09:52:36.830133] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:40:29.551 ************************************ 00:40:29.551 END TEST dd_flag_directory 00:40:29.551 ************************************ 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:29.551 00:40:29.551 real 0m0.953s 00:40:29.551 user 0m0.541s 00:40:29.551 sys 0m0.203s 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:40:29.551 09:52:36 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:29.551 ************************************ 00:40:29.551 START TEST dd_flag_nofollow 00:40:29.551 ************************************ 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:29.551 09:52:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:29.551 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:29.551 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:29.551 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:40:29.551 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:29.551 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:29.551 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:29.551 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:29.551 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:29.551 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:29.551 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:29.552 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:29.552 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:29.552 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:29.809 [2024-12-09 09:52:37.063990] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:29.809 [2024-12-09 09:52:37.064113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60120 ] 00:40:29.809 [2024-12-09 09:52:37.220252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:29.809 [2024-12-09 09:52:37.260399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:29.809 [2024-12-09 09:52:37.295251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:30.068 [2024-12-09 09:52:37.318403] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:30.068 [2024-12-09 09:52:37.318482] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:30.068 [2024-12-09 09:52:37.318509] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:30.068 [2024-12-09 09:52:37.398061] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:30.068 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:30.068 09:52:37 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:30.069 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:30.069 [2024-12-09 09:52:37.564659] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:30.069 [2024-12-09 09:52:37.564749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60135 ] 00:40:30.328 [2024-12-09 09:52:37.711801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.328 [2024-12-09 09:52:37.744588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.328 [2024-12-09 09:52:37.776234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:30.328 [2024-12-09 09:52:37.797071] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:30.328 [2024-12-09 09:52:37.797135] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:30.328 [2024-12-09 09:52:37.797151] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:30.586 [2024-12-09 09:52:37.864952] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:40:30.586 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:40:30.586 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:30.586 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:40:30.586 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:40:30.586 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:40:30.586 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:30.586 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:40:30.586 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:40:30.586 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:40:30.586 09:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:30.586 [2024-12-09 09:52:38.042191] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:30.586 [2024-12-09 09:52:38.042292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60137 ] 00:40:30.844 [2024-12-09 09:52:38.190018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.844 [2024-12-09 09:52:38.224002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.844 [2024-12-09 09:52:38.254848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:30.844  [2024-12-09T09:52:38.604Z] Copying: 512/512 [B] (average 500 kBps) 00:40:31.102 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 248x5fwb1h2cy3au2bnkbmzwexzazbo05i3607cpni0drr6s4uruk8hc2aisw3zayhe40wyl6m460kaaqnw0e9vlf27hu6le72r5mb2ys3kf18p9bn9hgwikuj75vmtn53ix44ve6mmo577vm2d00tehs9iujpvb94ul96zdlq4gnpzkfafruf5gsmlax677md1ybo2cscni7hs1h6wisflzbr6wrhn4v7ebad71gsh884k9fxbb3ymlipcdvrgvy7zuru1xd6tz0iu3ji0nlta0tnfgksx3fx5e5lzujunbg552s6glo57q6q27fib4bhrj1u0wo63dfwcm5mlmkbabnz0nqcyxwb9il52vpt8noi8la34fbtibsyxt3ij1wovhu7sio2b6778sciuwwmof6ym7tpfwgljfh2hvvvlh6nql72bwdojp2nskyf2iqtqri4qop9sa9rovx3fz1gfgr3n9nq6zgpdl6ohrsz7cjca7y6pv999vjhvbxzl9 == \2\4\8\x\5\f\w\b\1\h\2\c\y\3\a\u\2\b\n\k\b\m\z\w\e\x\z\a\z\b\o\0\5\i\3\6\0\7\c\p\n\i\0\d\r\r\6\s\4\u\r\u\k\8\h\c\2\a\i\s\w\3\z\a\y\h\e\4\0\w\y\l\6\m\4\6\0\k\a\a\q\n\w\0\e\9\v\l\f\2\7\h\u\6\l\e\7\2\r\5\m\b\2\y\s\3\k\f\1\8\p\9\b\n\9\h\g\w\i\k\u\j\7\5\v\m\t\n\5\3\i\x\4\4\v\e\6\m\m\o\5\7\7\v\m\2\d\0\0\t\e\h\s\9\i\u\j\p\v\b\9\4\u\l\9\6\z\d\l\q\4\g\n\p\z\k\f\a\f\r\u\f\5\g\s\m\l\a\x\6\7\7\m\d\1\y\b\o\2\c\s\c\n\i\7\h\s\1\h\6\w\i\s\f\l\z\b\r\6\w\r\h\n\4\v\7\e\b\a\d\7\1\g\s\h\8\8\4\k\9\f\x\b\b\3\y\m\l\i\p\c\d\v\r\g\v\y\7\z\u\r\u\1\x\d\6\t\z\0\i\u\3\j\i\0\n\l\t\a\0\t\n\f\g\k\s\x\3\f\x\5\e\5\l\z\u\j\u\n\b\g\5\5\2\s\6\g\l\o\5\7\q\6\q\2\7\f\i\b\4\b\h\r\j\1\u\0\w\o\6\3\d\f\w\c\m\5\m\l\m\k\b\a\b\n\z\0\n\q\c\y\x\w\b\9\i\l\5\2\v\p\t\8\n\o\i\8\l\a\3\4\f\b\t\i\b\s\y\x\t\3\i\j\1\w\o\v\h\u\7\s\i\o\2\b\6\7\7\8\s\c\i\u\w\w\m\o\f\6\y\m\7\t\p\f\w\g\l\j\f\h\2\h\v\v\v\l\h\6\n\q\l\7\2\b\w\d\o\j\p\2\n\s\k\y\f\2\i\q\t\q\r\i\4\q\o\p\9\s\a\9\r\o\v\x\3\f\z\1\g\f\g\r\3\n\9\n\q\6\z\g\p\d\l\6\o\h\r\s\z\7\c\j\c\a\7\y\6\p\v\9\9\9\v\j\h\v\b\x\z\l\9 ]] 00:40:31.102 00:40:31.102 real 0m1.461s 00:40:31.102 user 0m0.849s 00:40:31.102 sys 0m0.380s 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:40:31.102 ************************************ 00:40:31.102 END TEST dd_flag_nofollow 00:40:31.102 ************************************ 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:31.102 ************************************ 00:40:31.102 START TEST dd_flag_noatime 00:40:31.102 ************************************ 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733737958 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733737958 00:40:31.102 09:52:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:40:32.475 09:52:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:32.475 [2024-12-09 09:52:39.592145] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:32.475 [2024-12-09 09:52:39.592249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60185 ] 00:40:32.475 [2024-12-09 09:52:39.747663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.475 [2024-12-09 09:52:39.782911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:32.475 [2024-12-09 09:52:39.814108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:32.475  [2024-12-09T09:52:40.236Z] Copying: 512/512 [B] (average 500 kBps) 00:40:32.734 00:40:32.734 09:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:32.734 09:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733737958 )) 00:40:32.734 09:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:32.734 09:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733737958 )) 00:40:32.734 09:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:32.734 [2024-12-09 09:52:40.068475] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:32.734 [2024-12-09 09:52:40.068557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60193 ] 00:40:32.734 [2024-12-09 09:52:40.219151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.994 [2024-12-09 09:52:40.260374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:32.994 [2024-12-09 09:52:40.294058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:32.994  [2024-12-09T09:52:40.759Z] Copying: 512/512 [B] (average 500 kBps) 00:40:33.257 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733737960 )) 00:40:33.257 00:40:33.257 real 0m2.001s 00:40:33.257 user 0m0.570s 00:40:33.257 sys 0m0.387s 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:40:33.257 ************************************ 00:40:33.257 END TEST dd_flag_noatime 00:40:33.257 ************************************ 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:33.257 ************************************ 00:40:33.257 START TEST dd_flags_misc 00:40:33.257 ************************************ 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:33.257 09:52:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:33.257 [2024-12-09 09:52:40.628651] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:33.257 [2024-12-09 09:52:40.628751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60229 ] 00:40:33.516 [2024-12-09 09:52:40.782592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.516 [2024-12-09 09:52:40.816327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.516 [2024-12-09 09:52:40.847126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:33.516  [2024-12-09T09:52:41.277Z] Copying: 512/512 [B] (average 500 kBps) 00:40:33.775 00:40:33.775 09:52:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 15g57hijetv1vznunbeclcovovkkt6svicxlwaeyaieifh42ir671bjtke725516jpyl5yra9xji0xtfybofzrcvvfqued4xqmthy6njpm1jovwww1523y3oghdojcnlm5itexk9fznzw9aippgklo9zg0oq8y3tx64zq9cbqih1b4eyjvaf2gp899yq8fqel1dcespaao5cblfsydxxndsnpdzs56mlwtnj2w5mvc1gg759k5nlylpoyvlgkcmbtpwyij0qjc4w6yjd9gdc5728su3cjv8om5s8famceoxssj3p80jxtcejld7cc0azuhmn9j94fflm3p42gvedmavzy3ep8g6j81cawzx4dbnnv4sqgvkaqd3ab7l8gzuqcrlciui37mbnunxwp5claiq5zh6rora5stomrtu7rwk236asar3l6say3rdhvz8otngipooeas9yopxpgw9tolj4cn2037cubwp1m51nsw9f1oh50605sda9c8ejgkyf == \1\5\g\5\7\h\i\j\e\t\v\1\v\z\n\u\n\b\e\c\l\c\o\v\o\v\k\k\t\6\s\v\i\c\x\l\w\a\e\y\a\i\e\i\f\h\4\2\i\r\6\7\1\b\j\t\k\e\7\2\5\5\1\6\j\p\y\l\5\y\r\a\9\x\j\i\0\x\t\f\y\b\o\f\z\r\c\v\v\f\q\u\e\d\4\x\q\m\t\h\y\6\n\j\p\m\1\j\o\v\w\w\w\1\5\2\3\y\3\o\g\h\d\o\j\c\n\l\m\5\i\t\e\x\k\9\f\z\n\z\w\9\a\i\p\p\g\k\l\o\9\z\g\0\o\q\8\y\3\t\x\6\4\z\q\9\c\b\q\i\h\1\b\4\e\y\j\v\a\f\2\g\p\8\9\9\y\q\8\f\q\e\l\1\d\c\e\s\p\a\a\o\5\c\b\l\f\s\y\d\x\x\n\d\s\n\p\d\z\s\5\6\m\l\w\t\n\j\2\w\5\m\v\c\1\g\g\7\5\9\k\5\n\l\y\l\p\o\y\v\l\g\k\c\m\b\t\p\w\y\i\j\0\q\j\c\4\w\6\y\j\d\9\g\d\c\5\7\2\8\s\u\3\c\j\v\8\o\m\5\s\8\f\a\m\c\e\o\x\s\s\j\3\p\8\0\j\x\t\c\e\j\l\d\7\c\c\0\a\z\u\h\m\n\9\j\9\4\f\f\l\m\3\p\4\2\g\v\e\d\m\a\v\z\y\3\e\p\8\g\6\j\8\1\c\a\w\z\x\4\d\b\n\n\v\4\s\q\g\v\k\a\q\d\3\a\b\7\l\8\g\z\u\q\c\r\l\c\i\u\i\3\7\m\b\n\u\n\x\w\p\5\c\l\a\i\q\5\z\h\6\r\o\r\a\5\s\t\o\m\r\t\u\7\r\w\k\2\3\6\a\s\a\r\3\l\6\s\a\y\3\r\d\h\v\z\8\o\t\n\g\i\p\o\o\e\a\s\9\y\o\p\x\p\g\w\9\t\o\l\j\4\c\n\2\0\3\7\c\u\b\w\p\1\m\5\1\n\s\w\9\f\1\o\h\5\0\6\0\5\s\d\a\9\c\8\e\j\g\k\y\f ]] 00:40:33.775 09:52:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:33.775 09:52:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:33.775 [2024-12-09 09:52:41.117915] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:33.775 [2024-12-09 09:52:41.118048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60233 ] 00:40:33.775 [2024-12-09 09:52:41.272557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.033 [2024-12-09 09:52:41.307666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:34.033 [2024-12-09 09:52:41.341326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:34.033  [2024-12-09T09:52:41.793Z] Copying: 512/512 [B] (average 500 kBps) 00:40:34.291 00:40:34.291 09:52:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 15g57hijetv1vznunbeclcovovkkt6svicxlwaeyaieifh42ir671bjtke725516jpyl5yra9xji0xtfybofzrcvvfqued4xqmthy6njpm1jovwww1523y3oghdojcnlm5itexk9fznzw9aippgklo9zg0oq8y3tx64zq9cbqih1b4eyjvaf2gp899yq8fqel1dcespaao5cblfsydxxndsnpdzs56mlwtnj2w5mvc1gg759k5nlylpoyvlgkcmbtpwyij0qjc4w6yjd9gdc5728su3cjv8om5s8famceoxssj3p80jxtcejld7cc0azuhmn9j94fflm3p42gvedmavzy3ep8g6j81cawzx4dbnnv4sqgvkaqd3ab7l8gzuqcrlciui37mbnunxwp5claiq5zh6rora5stomrtu7rwk236asar3l6say3rdhvz8otngipooeas9yopxpgw9tolj4cn2037cubwp1m51nsw9f1oh50605sda9c8ejgkyf == \1\5\g\5\7\h\i\j\e\t\v\1\v\z\n\u\n\b\e\c\l\c\o\v\o\v\k\k\t\6\s\v\i\c\x\l\w\a\e\y\a\i\e\i\f\h\4\2\i\r\6\7\1\b\j\t\k\e\7\2\5\5\1\6\j\p\y\l\5\y\r\a\9\x\j\i\0\x\t\f\y\b\o\f\z\r\c\v\v\f\q\u\e\d\4\x\q\m\t\h\y\6\n\j\p\m\1\j\o\v\w\w\w\1\5\2\3\y\3\o\g\h\d\o\j\c\n\l\m\5\i\t\e\x\k\9\f\z\n\z\w\9\a\i\p\p\g\k\l\o\9\z\g\0\o\q\8\y\3\t\x\6\4\z\q\9\c\b\q\i\h\1\b\4\e\y\j\v\a\f\2\g\p\8\9\9\y\q\8\f\q\e\l\1\d\c\e\s\p\a\a\o\5\c\b\l\f\s\y\d\x\x\n\d\s\n\p\d\z\s\5\6\m\l\w\t\n\j\2\w\5\m\v\c\1\g\g\7\5\9\k\5\n\l\y\l\p\o\y\v\l\g\k\c\m\b\t\p\w\y\i\j\0\q\j\c\4\w\6\y\j\d\9\g\d\c\5\7\2\8\s\u\3\c\j\v\8\o\m\5\s\8\f\a\m\c\e\o\x\s\s\j\3\p\8\0\j\x\t\c\e\j\l\d\7\c\c\0\a\z\u\h\m\n\9\j\9\4\f\f\l\m\3\p\4\2\g\v\e\d\m\a\v\z\y\3\e\p\8\g\6\j\8\1\c\a\w\z\x\4\d\b\n\n\v\4\s\q\g\v\k\a\q\d\3\a\b\7\l\8\g\z\u\q\c\r\l\c\i\u\i\3\7\m\b\n\u\n\x\w\p\5\c\l\a\i\q\5\z\h\6\r\o\r\a\5\s\t\o\m\r\t\u\7\r\w\k\2\3\6\a\s\a\r\3\l\6\s\a\y\3\r\d\h\v\z\8\o\t\n\g\i\p\o\o\e\a\s\9\y\o\p\x\p\g\w\9\t\o\l\j\4\c\n\2\0\3\7\c\u\b\w\p\1\m\5\1\n\s\w\9\f\1\o\h\5\0\6\0\5\s\d\a\9\c\8\e\j\g\k\y\f ]] 00:40:34.291 09:52:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:34.291 09:52:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:34.291 [2024-12-09 09:52:41.612525] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:34.291 [2024-12-09 09:52:41.612627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60248 ] 00:40:34.291 [2024-12-09 09:52:41.767844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.549 [2024-12-09 09:52:41.802210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:34.549 [2024-12-09 09:52:41.833096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:34.549  [2024-12-09T09:52:42.051Z] Copying: 512/512 [B] (average 250 kBps) 00:40:34.549 00:40:34.549 09:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 15g57hijetv1vznunbeclcovovkkt6svicxlwaeyaieifh42ir671bjtke725516jpyl5yra9xji0xtfybofzrcvvfqued4xqmthy6njpm1jovwww1523y3oghdojcnlm5itexk9fznzw9aippgklo9zg0oq8y3tx64zq9cbqih1b4eyjvaf2gp899yq8fqel1dcespaao5cblfsydxxndsnpdzs56mlwtnj2w5mvc1gg759k5nlylpoyvlgkcmbtpwyij0qjc4w6yjd9gdc5728su3cjv8om5s8famceoxssj3p80jxtcejld7cc0azuhmn9j94fflm3p42gvedmavzy3ep8g6j81cawzx4dbnnv4sqgvkaqd3ab7l8gzuqcrlciui37mbnunxwp5claiq5zh6rora5stomrtu7rwk236asar3l6say3rdhvz8otngipooeas9yopxpgw9tolj4cn2037cubwp1m51nsw9f1oh50605sda9c8ejgkyf == \1\5\g\5\7\h\i\j\e\t\v\1\v\z\n\u\n\b\e\c\l\c\o\v\o\v\k\k\t\6\s\v\i\c\x\l\w\a\e\y\a\i\e\i\f\h\4\2\i\r\6\7\1\b\j\t\k\e\7\2\5\5\1\6\j\p\y\l\5\y\r\a\9\x\j\i\0\x\t\f\y\b\o\f\z\r\c\v\v\f\q\u\e\d\4\x\q\m\t\h\y\6\n\j\p\m\1\j\o\v\w\w\w\1\5\2\3\y\3\o\g\h\d\o\j\c\n\l\m\5\i\t\e\x\k\9\f\z\n\z\w\9\a\i\p\p\g\k\l\o\9\z\g\0\o\q\8\y\3\t\x\6\4\z\q\9\c\b\q\i\h\1\b\4\e\y\j\v\a\f\2\g\p\8\9\9\y\q\8\f\q\e\l\1\d\c\e\s\p\a\a\o\5\c\b\l\f\s\y\d\x\x\n\d\s\n\p\d\z\s\5\6\m\l\w\t\n\j\2\w\5\m\v\c\1\g\g\7\5\9\k\5\n\l\y\l\p\o\y\v\l\g\k\c\m\b\t\p\w\y\i\j\0\q\j\c\4\w\6\y\j\d\9\g\d\c\5\7\2\8\s\u\3\c\j\v\8\o\m\5\s\8\f\a\m\c\e\o\x\s\s\j\3\p\8\0\j\x\t\c\e\j\l\d\7\c\c\0\a\z\u\h\m\n\9\j\9\4\f\f\l\m\3\p\4\2\g\v\e\d\m\a\v\z\y\3\e\p\8\g\6\j\8\1\c\a\w\z\x\4\d\b\n\n\v\4\s\q\g\v\k\a\q\d\3\a\b\7\l\8\g\z\u\q\c\r\l\c\i\u\i\3\7\m\b\n\u\n\x\w\p\5\c\l\a\i\q\5\z\h\6\r\o\r\a\5\s\t\o\m\r\t\u\7\r\w\k\2\3\6\a\s\a\r\3\l\6\s\a\y\3\r\d\h\v\z\8\o\t\n\g\i\p\o\o\e\a\s\9\y\o\p\x\p\g\w\9\t\o\l\j\4\c\n\2\0\3\7\c\u\b\w\p\1\m\5\1\n\s\w\9\f\1\o\h\5\0\6\0\5\s\d\a\9\c\8\e\j\g\k\y\f ]] 00:40:34.549 09:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:34.549 09:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:40:34.808 [2024-12-09 09:52:42.092909] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:34.808 [2024-12-09 09:52:42.093089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60252 ] 00:40:34.808 [2024-12-09 09:52:42.244175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.808 [2024-12-09 09:52:42.278655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:35.067 [2024-12-09 09:52:42.310069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:35.067  [2024-12-09T09:52:42.569Z] Copying: 512/512 [B] (average 250 kBps) 00:40:35.067 00:40:35.067 09:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 15g57hijetv1vznunbeclcovovkkt6svicxlwaeyaieifh42ir671bjtke725516jpyl5yra9xji0xtfybofzrcvvfqued4xqmthy6njpm1jovwww1523y3oghdojcnlm5itexk9fznzw9aippgklo9zg0oq8y3tx64zq9cbqih1b4eyjvaf2gp899yq8fqel1dcespaao5cblfsydxxndsnpdzs56mlwtnj2w5mvc1gg759k5nlylpoyvlgkcmbtpwyij0qjc4w6yjd9gdc5728su3cjv8om5s8famceoxssj3p80jxtcejld7cc0azuhmn9j94fflm3p42gvedmavzy3ep8g6j81cawzx4dbnnv4sqgvkaqd3ab7l8gzuqcrlciui37mbnunxwp5claiq5zh6rora5stomrtu7rwk236asar3l6say3rdhvz8otngipooeas9yopxpgw9tolj4cn2037cubwp1m51nsw9f1oh50605sda9c8ejgkyf == \1\5\g\5\7\h\i\j\e\t\v\1\v\z\n\u\n\b\e\c\l\c\o\v\o\v\k\k\t\6\s\v\i\c\x\l\w\a\e\y\a\i\e\i\f\h\4\2\i\r\6\7\1\b\j\t\k\e\7\2\5\5\1\6\j\p\y\l\5\y\r\a\9\x\j\i\0\x\t\f\y\b\o\f\z\r\c\v\v\f\q\u\e\d\4\x\q\m\t\h\y\6\n\j\p\m\1\j\o\v\w\w\w\1\5\2\3\y\3\o\g\h\d\o\j\c\n\l\m\5\i\t\e\x\k\9\f\z\n\z\w\9\a\i\p\p\g\k\l\o\9\z\g\0\o\q\8\y\3\t\x\6\4\z\q\9\c\b\q\i\h\1\b\4\e\y\j\v\a\f\2\g\p\8\9\9\y\q\8\f\q\e\l\1\d\c\e\s\p\a\a\o\5\c\b\l\f\s\y\d\x\x\n\d\s\n\p\d\z\s\5\6\m\l\w\t\n\j\2\w\5\m\v\c\1\g\g\7\5\9\k\5\n\l\y\l\p\o\y\v\l\g\k\c\m\b\t\p\w\y\i\j\0\q\j\c\4\w\6\y\j\d\9\g\d\c\5\7\2\8\s\u\3\c\j\v\8\o\m\5\s\8\f\a\m\c\e\o\x\s\s\j\3\p\8\0\j\x\t\c\e\j\l\d\7\c\c\0\a\z\u\h\m\n\9\j\9\4\f\f\l\m\3\p\4\2\g\v\e\d\m\a\v\z\y\3\e\p\8\g\6\j\8\1\c\a\w\z\x\4\d\b\n\n\v\4\s\q\g\v\k\a\q\d\3\a\b\7\l\8\g\z\u\q\c\r\l\c\i\u\i\3\7\m\b\n\u\n\x\w\p\5\c\l\a\i\q\5\z\h\6\r\o\r\a\5\s\t\o\m\r\t\u\7\r\w\k\2\3\6\a\s\a\r\3\l\6\s\a\y\3\r\d\h\v\z\8\o\t\n\g\i\p\o\o\e\a\s\9\y\o\p\x\p\g\w\9\t\o\l\j\4\c\n\2\0\3\7\c\u\b\w\p\1\m\5\1\n\s\w\9\f\1\o\h\5\0\6\0\5\s\d\a\9\c\8\e\j\g\k\y\f ]] 00:40:35.067 09:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:35.067 09:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:40:35.067 09:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:40:35.067 09:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:40:35.067 09:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:35.067 09:52:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:35.325 [2024-12-09 09:52:42.576072] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:35.325 [2024-12-09 09:52:42.576200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60267 ] 00:40:35.325 [2024-12-09 09:52:42.731442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.325 [2024-12-09 09:52:42.767327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:35.325 [2024-12-09 09:52:42.799697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:35.325  [2024-12-09T09:52:43.086Z] Copying: 512/512 [B] (average 500 kBps) 00:40:35.584 00:40:35.584 09:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8sid6pm74o1ruoedu1eqkd4krerhziuya9m7n8gpiqgfwzgsrjb6hw7oxk1j4iaf821ujdsaz6hhglizrjapdc1qc8iymy0ix9qxgvz7ipq2qfduzxbfu4lkq9dos1ik3871a7qq9pct07bpt9q51voe5dutfze3gn35u8zx9j3tw45xcre89axj1rhe6ad4opcju2ua4a3mheaaakkogrzucmif0mnpkwvndjbhbqvu58xfgui4d080l5h0ajkspzy4qore3ykwok3i8zc3hlo1q0ih74pg926spr1gemtgapkn07b2cl2cqth30hl2c0fgeo9w51fk8rapvkg04dfyndhpzmogwudfp0w98ahyybzd4y25su3eg8tvbhjvxr3mp5enbwh7u08eaqadf9mfvdmee2eiyf89o1yvg1h289tf50byovivjpkdyeul117z9ixsyd0ko6kzm05ycflceew8g5kp791mhc8ef7g49b3ss5wnc0onz7bnoo1q == \8\s\i\d\6\p\m\7\4\o\1\r\u\o\e\d\u\1\e\q\k\d\4\k\r\e\r\h\z\i\u\y\a\9\m\7\n\8\g\p\i\q\g\f\w\z\g\s\r\j\b\6\h\w\7\o\x\k\1\j\4\i\a\f\8\2\1\u\j\d\s\a\z\6\h\h\g\l\i\z\r\j\a\p\d\c\1\q\c\8\i\y\m\y\0\i\x\9\q\x\g\v\z\7\i\p\q\2\q\f\d\u\z\x\b\f\u\4\l\k\q\9\d\o\s\1\i\k\3\8\7\1\a\7\q\q\9\p\c\t\0\7\b\p\t\9\q\5\1\v\o\e\5\d\u\t\f\z\e\3\g\n\3\5\u\8\z\x\9\j\3\t\w\4\5\x\c\r\e\8\9\a\x\j\1\r\h\e\6\a\d\4\o\p\c\j\u\2\u\a\4\a\3\m\h\e\a\a\a\k\k\o\g\r\z\u\c\m\i\f\0\m\n\p\k\w\v\n\d\j\b\h\b\q\v\u\5\8\x\f\g\u\i\4\d\0\8\0\l\5\h\0\a\j\k\s\p\z\y\4\q\o\r\e\3\y\k\w\o\k\3\i\8\z\c\3\h\l\o\1\q\0\i\h\7\4\p\g\9\2\6\s\p\r\1\g\e\m\t\g\a\p\k\n\0\7\b\2\c\l\2\c\q\t\h\3\0\h\l\2\c\0\f\g\e\o\9\w\5\1\f\k\8\r\a\p\v\k\g\0\4\d\f\y\n\d\h\p\z\m\o\g\w\u\d\f\p\0\w\9\8\a\h\y\y\b\z\d\4\y\2\5\s\u\3\e\g\8\t\v\b\h\j\v\x\r\3\m\p\5\e\n\b\w\h\7\u\0\8\e\a\q\a\d\f\9\m\f\v\d\m\e\e\2\e\i\y\f\8\9\o\1\y\v\g\1\h\2\8\9\t\f\5\0\b\y\o\v\i\v\j\p\k\d\y\e\u\l\1\1\7\z\9\i\x\s\y\d\0\k\o\6\k\z\m\0\5\y\c\f\l\c\e\e\w\8\g\5\k\p\7\9\1\m\h\c\8\e\f\7\g\4\9\b\3\s\s\5\w\n\c\0\o\n\z\7\b\n\o\o\1\q ]] 00:40:35.584 09:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:35.584 09:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:35.584 [2024-12-09 09:52:43.063066] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:35.584 [2024-12-09 09:52:43.063178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60271 ] 00:40:35.842 [2024-12-09 09:52:43.216770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.842 [2024-12-09 09:52:43.251457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:35.842 [2024-12-09 09:52:43.283736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:35.842  [2024-12-09T09:52:43.601Z] Copying: 512/512 [B] (average 500 kBps) 00:40:36.099 00:40:36.100 09:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8sid6pm74o1ruoedu1eqkd4krerhziuya9m7n8gpiqgfwzgsrjb6hw7oxk1j4iaf821ujdsaz6hhglizrjapdc1qc8iymy0ix9qxgvz7ipq2qfduzxbfu4lkq9dos1ik3871a7qq9pct07bpt9q51voe5dutfze3gn35u8zx9j3tw45xcre89axj1rhe6ad4opcju2ua4a3mheaaakkogrzucmif0mnpkwvndjbhbqvu58xfgui4d080l5h0ajkspzy4qore3ykwok3i8zc3hlo1q0ih74pg926spr1gemtgapkn07b2cl2cqth30hl2c0fgeo9w51fk8rapvkg04dfyndhpzmogwudfp0w98ahyybzd4y25su3eg8tvbhjvxr3mp5enbwh7u08eaqadf9mfvdmee2eiyf89o1yvg1h289tf50byovivjpkdyeul117z9ixsyd0ko6kzm05ycflceew8g5kp791mhc8ef7g49b3ss5wnc0onz7bnoo1q == \8\s\i\d\6\p\m\7\4\o\1\r\u\o\e\d\u\1\e\q\k\d\4\k\r\e\r\h\z\i\u\y\a\9\m\7\n\8\g\p\i\q\g\f\w\z\g\s\r\j\b\6\h\w\7\o\x\k\1\j\4\i\a\f\8\2\1\u\j\d\s\a\z\6\h\h\g\l\i\z\r\j\a\p\d\c\1\q\c\8\i\y\m\y\0\i\x\9\q\x\g\v\z\7\i\p\q\2\q\f\d\u\z\x\b\f\u\4\l\k\q\9\d\o\s\1\i\k\3\8\7\1\a\7\q\q\9\p\c\t\0\7\b\p\t\9\q\5\1\v\o\e\5\d\u\t\f\z\e\3\g\n\3\5\u\8\z\x\9\j\3\t\w\4\5\x\c\r\e\8\9\a\x\j\1\r\h\e\6\a\d\4\o\p\c\j\u\2\u\a\4\a\3\m\h\e\a\a\a\k\k\o\g\r\z\u\c\m\i\f\0\m\n\p\k\w\v\n\d\j\b\h\b\q\v\u\5\8\x\f\g\u\i\4\d\0\8\0\l\5\h\0\a\j\k\s\p\z\y\4\q\o\r\e\3\y\k\w\o\k\3\i\8\z\c\3\h\l\o\1\q\0\i\h\7\4\p\g\9\2\6\s\p\r\1\g\e\m\t\g\a\p\k\n\0\7\b\2\c\l\2\c\q\t\h\3\0\h\l\2\c\0\f\g\e\o\9\w\5\1\f\k\8\r\a\p\v\k\g\0\4\d\f\y\n\d\h\p\z\m\o\g\w\u\d\f\p\0\w\9\8\a\h\y\y\b\z\d\4\y\2\5\s\u\3\e\g\8\t\v\b\h\j\v\x\r\3\m\p\5\e\n\b\w\h\7\u\0\8\e\a\q\a\d\f\9\m\f\v\d\m\e\e\2\e\i\y\f\8\9\o\1\y\v\g\1\h\2\8\9\t\f\5\0\b\y\o\v\i\v\j\p\k\d\y\e\u\l\1\1\7\z\9\i\x\s\y\d\0\k\o\6\k\z\m\0\5\y\c\f\l\c\e\e\w\8\g\5\k\p\7\9\1\m\h\c\8\e\f\7\g\4\9\b\3\s\s\5\w\n\c\0\o\n\z\7\b\n\o\o\1\q ]] 00:40:36.100 09:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:36.100 09:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:36.100 [2024-12-09 09:52:43.534477] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:36.100 [2024-12-09 09:52:43.534589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60281 ] 00:40:36.359 [2024-12-09 09:52:43.687422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.359 [2024-12-09 09:52:43.721851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.359 [2024-12-09 09:52:43.752340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:36.359  [2024-12-09T09:52:44.119Z] Copying: 512/512 [B] (average 166 kBps) 00:40:36.617 00:40:36.617 09:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8sid6pm74o1ruoedu1eqkd4krerhziuya9m7n8gpiqgfwzgsrjb6hw7oxk1j4iaf821ujdsaz6hhglizrjapdc1qc8iymy0ix9qxgvz7ipq2qfduzxbfu4lkq9dos1ik3871a7qq9pct07bpt9q51voe5dutfze3gn35u8zx9j3tw45xcre89axj1rhe6ad4opcju2ua4a3mheaaakkogrzucmif0mnpkwvndjbhbqvu58xfgui4d080l5h0ajkspzy4qore3ykwok3i8zc3hlo1q0ih74pg926spr1gemtgapkn07b2cl2cqth30hl2c0fgeo9w51fk8rapvkg04dfyndhpzmogwudfp0w98ahyybzd4y25su3eg8tvbhjvxr3mp5enbwh7u08eaqadf9mfvdmee2eiyf89o1yvg1h289tf50byovivjpkdyeul117z9ixsyd0ko6kzm05ycflceew8g5kp791mhc8ef7g49b3ss5wnc0onz7bnoo1q == \8\s\i\d\6\p\m\7\4\o\1\r\u\o\e\d\u\1\e\q\k\d\4\k\r\e\r\h\z\i\u\y\a\9\m\7\n\8\g\p\i\q\g\f\w\z\g\s\r\j\b\6\h\w\7\o\x\k\1\j\4\i\a\f\8\2\1\u\j\d\s\a\z\6\h\h\g\l\i\z\r\j\a\p\d\c\1\q\c\8\i\y\m\y\0\i\x\9\q\x\g\v\z\7\i\p\q\2\q\f\d\u\z\x\b\f\u\4\l\k\q\9\d\o\s\1\i\k\3\8\7\1\a\7\q\q\9\p\c\t\0\7\b\p\t\9\q\5\1\v\o\e\5\d\u\t\f\z\e\3\g\n\3\5\u\8\z\x\9\j\3\t\w\4\5\x\c\r\e\8\9\a\x\j\1\r\h\e\6\a\d\4\o\p\c\j\u\2\u\a\4\a\3\m\h\e\a\a\a\k\k\o\g\r\z\u\c\m\i\f\0\m\n\p\k\w\v\n\d\j\b\h\b\q\v\u\5\8\x\f\g\u\i\4\d\0\8\0\l\5\h\0\a\j\k\s\p\z\y\4\q\o\r\e\3\y\k\w\o\k\3\i\8\z\c\3\h\l\o\1\q\0\i\h\7\4\p\g\9\2\6\s\p\r\1\g\e\m\t\g\a\p\k\n\0\7\b\2\c\l\2\c\q\t\h\3\0\h\l\2\c\0\f\g\e\o\9\w\5\1\f\k\8\r\a\p\v\k\g\0\4\d\f\y\n\d\h\p\z\m\o\g\w\u\d\f\p\0\w\9\8\a\h\y\y\b\z\d\4\y\2\5\s\u\3\e\g\8\t\v\b\h\j\v\x\r\3\m\p\5\e\n\b\w\h\7\u\0\8\e\a\q\a\d\f\9\m\f\v\d\m\e\e\2\e\i\y\f\8\9\o\1\y\v\g\1\h\2\8\9\t\f\5\0\b\y\o\v\i\v\j\p\k\d\y\e\u\l\1\1\7\z\9\i\x\s\y\d\0\k\o\6\k\z\m\0\5\y\c\f\l\c\e\e\w\8\g\5\k\p\7\9\1\m\h\c\8\e\f\7\g\4\9\b\3\s\s\5\w\n\c\0\o\n\z\7\b\n\o\o\1\q ]] 00:40:36.617 09:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:36.617 09:52:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:40:36.617 [2024-12-09 09:52:43.999815] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:36.617 [2024-12-09 09:52:43.999931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60290 ] 00:40:36.876 [2024-12-09 09:52:44.147712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.876 [2024-12-09 09:52:44.181176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.876 [2024-12-09 09:52:44.211327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:36.876  [2024-12-09T09:52:44.638Z] Copying: 512/512 [B] (average 500 kBps) 00:40:37.136 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8sid6pm74o1ruoedu1eqkd4krerhziuya9m7n8gpiqgfwzgsrjb6hw7oxk1j4iaf821ujdsaz6hhglizrjapdc1qc8iymy0ix9qxgvz7ipq2qfduzxbfu4lkq9dos1ik3871a7qq9pct07bpt9q51voe5dutfze3gn35u8zx9j3tw45xcre89axj1rhe6ad4opcju2ua4a3mheaaakkogrzucmif0mnpkwvndjbhbqvu58xfgui4d080l5h0ajkspzy4qore3ykwok3i8zc3hlo1q0ih74pg926spr1gemtgapkn07b2cl2cqth30hl2c0fgeo9w51fk8rapvkg04dfyndhpzmogwudfp0w98ahyybzd4y25su3eg8tvbhjvxr3mp5enbwh7u08eaqadf9mfvdmee2eiyf89o1yvg1h289tf50byovivjpkdyeul117z9ixsyd0ko6kzm05ycflceew8g5kp791mhc8ef7g49b3ss5wnc0onz7bnoo1q == \8\s\i\d\6\p\m\7\4\o\1\r\u\o\e\d\u\1\e\q\k\d\4\k\r\e\r\h\z\i\u\y\a\9\m\7\n\8\g\p\i\q\g\f\w\z\g\s\r\j\b\6\h\w\7\o\x\k\1\j\4\i\a\f\8\2\1\u\j\d\s\a\z\6\h\h\g\l\i\z\r\j\a\p\d\c\1\q\c\8\i\y\m\y\0\i\x\9\q\x\g\v\z\7\i\p\q\2\q\f\d\u\z\x\b\f\u\4\l\k\q\9\d\o\s\1\i\k\3\8\7\1\a\7\q\q\9\p\c\t\0\7\b\p\t\9\q\5\1\v\o\e\5\d\u\t\f\z\e\3\g\n\3\5\u\8\z\x\9\j\3\t\w\4\5\x\c\r\e\8\9\a\x\j\1\r\h\e\6\a\d\4\o\p\c\j\u\2\u\a\4\a\3\m\h\e\a\a\a\k\k\o\g\r\z\u\c\m\i\f\0\m\n\p\k\w\v\n\d\j\b\h\b\q\v\u\5\8\x\f\g\u\i\4\d\0\8\0\l\5\h\0\a\j\k\s\p\z\y\4\q\o\r\e\3\y\k\w\o\k\3\i\8\z\c\3\h\l\o\1\q\0\i\h\7\4\p\g\9\2\6\s\p\r\1\g\e\m\t\g\a\p\k\n\0\7\b\2\c\l\2\c\q\t\h\3\0\h\l\2\c\0\f\g\e\o\9\w\5\1\f\k\8\r\a\p\v\k\g\0\4\d\f\y\n\d\h\p\z\m\o\g\w\u\d\f\p\0\w\9\8\a\h\y\y\b\z\d\4\y\2\5\s\u\3\e\g\8\t\v\b\h\j\v\x\r\3\m\p\5\e\n\b\w\h\7\u\0\8\e\a\q\a\d\f\9\m\f\v\d\m\e\e\2\e\i\y\f\8\9\o\1\y\v\g\1\h\2\8\9\t\f\5\0\b\y\o\v\i\v\j\p\k\d\y\e\u\l\1\1\7\z\9\i\x\s\y\d\0\k\o\6\k\z\m\0\5\y\c\f\l\c\e\e\w\8\g\5\k\p\7\9\1\m\h\c\8\e\f\7\g\4\9\b\3\s\s\5\w\n\c\0\o\n\z\7\b\n\o\o\1\q ]] 00:40:37.136 00:40:37.136 real 0m3.843s 00:40:37.136 user 0m2.193s 00:40:37.136 sys 0m1.468s 00:40:37.136 ************************************ 00:40:37.136 END TEST dd_flags_misc 00:40:37.136 ************************************ 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:40:37.136 * Second test run, disabling liburing, forcing AIO 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:40:37.136 ************************************ 00:40:37.136 START TEST dd_flag_append_forced_aio 00:40:37.136 ************************************ 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=1htwiphaxceqo6wtry7nqcgpmilcyws8 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=xc0n8nkdfdwp3v38wcad7surz5uudsno 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 1htwiphaxceqo6wtry7nqcgpmilcyws8 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s xc0n8nkdfdwp3v38wcad7surz5uudsno 00:40:37.136 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:40:37.136 [2024-12-09 09:52:44.523210] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:37.136 [2024-12-09 09:52:44.523303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60320 ] 00:40:37.395 [2024-12-09 09:52:44.679360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:37.395 [2024-12-09 09:52:44.718086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.395 [2024-12-09 09:52:44.750707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:37.395  [2024-12-09T09:52:45.156Z] Copying: 32/32 [B] (average 31 kBps) 00:40:37.654 00:40:37.654 ************************************ 00:40:37.654 END TEST dd_flag_append_forced_aio 00:40:37.654 ************************************ 00:40:37.654 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ xc0n8nkdfdwp3v38wcad7surz5uudsno1htwiphaxceqo6wtry7nqcgpmilcyws8 == \x\c\0\n\8\n\k\d\f\d\w\p\3\v\3\8\w\c\a\d\7\s\u\r\z\5\u\u\d\s\n\o\1\h\t\w\i\p\h\a\x\c\e\q\o\6\w\t\r\y\7\n\q\c\g\p\m\i\l\c\y\w\s\8 ]] 00:40:37.654 00:40:37.654 real 0m0.527s 00:40:37.654 user 0m0.295s 00:40:37.654 sys 0m0.106s 00:40:37.654 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:37.654 09:52:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:37.654 ************************************ 00:40:37.654 START TEST dd_flag_directory_forced_aio 00:40:37.654 ************************************ 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:37.654 09:52:45 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:37.654 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:37.655 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:37.655 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:37.655 [2024-12-09 09:52:45.097354] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:37.655 [2024-12-09 09:52:45.097484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60346 ] 00:40:37.914 [2024-12-09 09:52:45.254750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:37.914 [2024-12-09 09:52:45.294806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.914 [2024-12-09 09:52:45.328225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:37.914 [2024-12-09 09:52:45.351929] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:37.914 [2024-12-09 09:52:45.352032] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:37.914 [2024-12-09 09:52:45.352063] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:38.172 [2024-12-09 09:52:45.430463] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:38.172 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:38.173 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:38.173 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:38.173 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:38.173 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:38.173 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:38.173 09:52:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:38.173 [2024-12-09 09:52:45.606923] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:38.173 [2024-12-09 09:52:45.607023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60356 ] 00:40:38.431 [2024-12-09 09:52:45.761586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:38.431 [2024-12-09 09:52:45.793112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:38.431 [2024-12-09 09:52:45.825746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:38.431 [2024-12-09 09:52:45.846094] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:38.431 [2024-12-09 09:52:45.846150] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:38.431 [2024-12-09 09:52:45.846164] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:38.431 [2024-12-09 09:52:45.917101] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:40:38.690 09:52:46 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:38.690 00:40:38.690 real 0m0.992s 00:40:38.690 user 0m0.581s 00:40:38.690 sys 0m0.200s 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:38.690 ************************************ 00:40:38.690 END TEST dd_flag_directory_forced_aio 00:40:38.690 ************************************ 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:38.690 ************************************ 00:40:38.690 START TEST dd_flag_nofollow_forced_aio 00:40:38.690 ************************************ 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:38.690 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:38.690 [2024-12-09 09:52:46.148189] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:38.690 [2024-12-09 09:52:46.148293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60384 ] 00:40:38.949 [2024-12-09 09:52:46.302672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:38.949 [2024-12-09 09:52:46.344519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:38.949 [2024-12-09 09:52:46.377383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:38.949 [2024-12-09 09:52:46.399044] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:38.949 [2024-12-09 09:52:46.399109] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:38.949 [2024-12-09 09:52:46.399128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:39.209 [2024-12-09 09:52:46.467732] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:39.209 09:52:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:39.209 [2024-12-09 09:52:46.643823] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:39.209 [2024-12-09 09:52:46.643942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60394 ] 00:40:39.469 [2024-12-09 09:52:46.792485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.469 [2024-12-09 09:52:46.822385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:39.469 [2024-12-09 09:52:46.852766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:39.469 [2024-12-09 09:52:46.872328] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:39.469 [2024-12-09 09:52:46.872394] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:39.469 [2024-12-09 09:52:46.872424] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:39.469 [2024-12-09 09:52:46.934707] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:40:39.729 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:40:39.729 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:39.729 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:40:39.729 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:40:39.729 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:40:39.729 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:39.729 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:40:39.729 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:39.729 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:39.729 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:39.729 [2024-12-09 09:52:47.099590] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:39.729 [2024-12-09 09:52:47.099720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60401 ] 00:40:39.988 [2024-12-09 09:52:47.245667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.988 [2024-12-09 09:52:47.277603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:39.988 [2024-12-09 09:52:47.307447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:39.988  [2024-12-09T09:52:47.750Z] Copying: 512/512 [B] (average 500 kBps) 00:40:40.248 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ qoqxw0gebe3qea38z730z1dsvcxuhasuojk15ujwtwihork15gchs4a8y300re0oisbuwxpykgnvzco7ynkq0u8ht291t2jfxnqzx69joj8jtgbcb005z5e34wl02hanmd3214wt11xadj4vwld2ejkhf1nv7k723lxwqhb97sd5bj04i9imit54tpu6fklfzlxlcsbnp4tnaxy3ark7oi97pbyvoc88qe7yxplkx0f2ymhe0ryruo6k0vx5uios9qlon6afyeqcbha3piek4k2lix2byrrsf30aa0er511t3ttdqfu38h956970w46r6wwqbhk490e0urlp0aijbqau4i2cg92i9jyrxc9ah55k14muw773t8xvm2fruw81k18ygdlfo1acsbxww2r3tmqzdfumjvwzf3spvd307xqarry7u3583mh9s3my3ci6282egis9hfupt52pogz7ivm6rf8tee1fdk7t80upojgdy94iy379g8gzqv9utjfa == \q\o\q\x\w\0\g\e\b\e\3\q\e\a\3\8\z\7\3\0\z\1\d\s\v\c\x\u\h\a\s\u\o\j\k\1\5\u\j\w\t\w\i\h\o\r\k\1\5\g\c\h\s\4\a\8\y\3\0\0\r\e\0\o\i\s\b\u\w\x\p\y\k\g\n\v\z\c\o\7\y\n\k\q\0\u\8\h\t\2\9\1\t\2\j\f\x\n\q\z\x\6\9\j\o\j\8\j\t\g\b\c\b\0\0\5\z\5\e\3\4\w\l\0\2\h\a\n\m\d\3\2\1\4\w\t\1\1\x\a\d\j\4\v\w\l\d\2\e\j\k\h\f\1\n\v\7\k\7\2\3\l\x\w\q\h\b\9\7\s\d\5\b\j\0\4\i\9\i\m\i\t\5\4\t\p\u\6\f\k\l\f\z\l\x\l\c\s\b\n\p\4\t\n\a\x\y\3\a\r\k\7\o\i\9\7\p\b\y\v\o\c\8\8\q\e\7\y\x\p\l\k\x\0\f\2\y\m\h\e\0\r\y\r\u\o\6\k\0\v\x\5\u\i\o\s\9\q\l\o\n\6\a\f\y\e\q\c\b\h\a\3\p\i\e\k\4\k\2\l\i\x\2\b\y\r\r\s\f\3\0\a\a\0\e\r\5\1\1\t\3\t\t\d\q\f\u\3\8\h\9\5\6\9\7\0\w\4\6\r\6\w\w\q\b\h\k\4\9\0\e\0\u\r\l\p\0\a\i\j\b\q\a\u\4\i\2\c\g\9\2\i\9\j\y\r\x\c\9\a\h\5\5\k\1\4\m\u\w\7\7\3\t\8\x\v\m\2\f\r\u\w\8\1\k\1\8\y\g\d\l\f\o\1\a\c\s\b\x\w\w\2\r\3\t\m\q\z\d\f\u\m\j\v\w\z\f\3\s\p\v\d\3\0\7\x\q\a\r\r\y\7\u\3\5\8\3\m\h\9\s\3\m\y\3\c\i\6\2\8\2\e\g\i\s\9\h\f\u\p\t\5\2\p\o\g\z\7\i\v\m\6\r\f\8\t\e\e\1\f\d\k\7\t\8\0\u\p\o\j\g\d\y\9\4\i\y\3\7\9\g\8\g\z\q\v\9\u\t\j\f\a ]] 00:40:40.248 00:40:40.248 real 0m1.434s 00:40:40.248 user 0m0.800s 00:40:40.248 sys 0m0.304s 00:40:40.248 ************************************ 00:40:40.248 END TEST dd_flag_nofollow_forced_aio 00:40:40.248 ************************************ 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:40.248 ************************************ 00:40:40.248 START TEST dd_flag_noatime_forced_aio 00:40:40.248 ************************************ 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733737967 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733737967 00:40:40.248 09:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:40:41.187 09:52:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:41.187 [2024-12-09 09:52:48.650263] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
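Stripped of the long paths, the noatime sequence recorded around this point reduces to the sketch below (spdk_dd stands in for the full build/bin/spdk_dd binary, and the values 1733737967/1733737969 in the xtrace are simply the expanded stat results; this is a paraphrase of the test, not extra log output):

    atime_if=$(stat --printf=%X dd.dump0)                    # source atime before any copy
    sleep 1
    spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( atime_if == $(stat --printf=%X dd.dump0) ))           # noatime read must leave atime untouched
    spdk_dd --aio --if=dd.dump0 --of=dd.dump1
    (( atime_if <  $(stat --printf=%X dd.dump0) ))           # a plain read is expected to advance it

The same before/after comparison is made for the destination file via atime_of, as the dd.dump1 stat lines above show.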
00:40:41.187 [2024-12-09 09:52:48.650404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60442 ] 00:40:41.446 [2024-12-09 09:52:48.807290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:41.446 [2024-12-09 09:52:48.847331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:41.446 [2024-12-09 09:52:48.880990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:41.446  [2024-12-09T09:52:49.207Z] Copying: 512/512 [B] (average 500 kBps) 00:40:41.705 00:40:41.705 09:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:41.705 09:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733737967 )) 00:40:41.705 09:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:41.705 09:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733737967 )) 00:40:41.705 09:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:41.705 [2024-12-09 09:52:49.188552] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:41.705 [2024-12-09 09:52:49.188663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60454 ] 00:40:41.964 [2024-12-09 09:52:49.346380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:41.965 [2024-12-09 09:52:49.386902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:41.965 [2024-12-09 09:52:49.422941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:41.965  [2024-12-09T09:52:49.726Z] Copying: 512/512 [B] (average 500 kBps) 00:40:42.224 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733737969 )) 00:40:42.224 00:40:42.224 real 0m2.082s 00:40:42.224 user 0m0.608s 00:40:42.224 sys 0m0.230s 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:42.224 ************************************ 00:40:42.224 END TEST dd_flag_noatime_forced_aio 00:40:42.224 ************************************ 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:40:42.224 ************************************ 00:40:42.224 START TEST dd_flags_misc_forced_aio 00:40:42.224 ************************************ 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:42.224 09:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:42.483 [2024-12-09 09:52:49.779246] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:42.484 [2024-12-09 09:52:49.779349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60483 ] 00:40:42.484 [2024-12-09 09:52:49.937222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.484 [2024-12-09 09:52:49.977722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:42.743 [2024-12-09 09:52:50.011680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:42.743  [2024-12-09T09:52:50.504Z] Copying: 512/512 [B] (average 500 kBps) 00:40:43.002 00:40:43.002 09:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 44toepawlox300y85sahkeigkav7x3ogusyjr0w0dpwvp1l1guv999w1tl36n1zu4vij9xhp9xjabwekoxftjvel4az4ljlt8hc9aqn7rvx1m9p0byg8rla7i63bwx8rqp79jae633s9d8cvc7pkdo4724s5ride8dh3eaztjqtlkl1x1tlod02escbmzw131imlohs31v1ke02fs8xrbiqk9d786lwz4wfvav3m9y2qtg00ast84u9mpb15fr3kpzvl73mkkanlifi5lagli14cui4pv9z9wxqp5lclaghy2uh7faxiu95b8p311ptugb5j3a4763h0r1x66vofvtzulgbmcwtsrt85h6ptkp0m7iv9ctz9ivhhyijhbsniku7b58e79ioiuoh145nygrdhwvuobzmv5oujbsq77scp706v2rr1mn9thli0c8rj4c4jz0lrc04ceqe3upgxk4fjeb3fled90f0niuikjs51di53zxf14mvz16bai3r9 == 
\4\4\t\o\e\p\a\w\l\o\x\3\0\0\y\8\5\s\a\h\k\e\i\g\k\a\v\7\x\3\o\g\u\s\y\j\r\0\w\0\d\p\w\v\p\1\l\1\g\u\v\9\9\9\w\1\t\l\3\6\n\1\z\u\4\v\i\j\9\x\h\p\9\x\j\a\b\w\e\k\o\x\f\t\j\v\e\l\4\a\z\4\l\j\l\t\8\h\c\9\a\q\n\7\r\v\x\1\m\9\p\0\b\y\g\8\r\l\a\7\i\6\3\b\w\x\8\r\q\p\7\9\j\a\e\6\3\3\s\9\d\8\c\v\c\7\p\k\d\o\4\7\2\4\s\5\r\i\d\e\8\d\h\3\e\a\z\t\j\q\t\l\k\l\1\x\1\t\l\o\d\0\2\e\s\c\b\m\z\w\1\3\1\i\m\l\o\h\s\3\1\v\1\k\e\0\2\f\s\8\x\r\b\i\q\k\9\d\7\8\6\l\w\z\4\w\f\v\a\v\3\m\9\y\2\q\t\g\0\0\a\s\t\8\4\u\9\m\p\b\1\5\f\r\3\k\p\z\v\l\7\3\m\k\k\a\n\l\i\f\i\5\l\a\g\l\i\1\4\c\u\i\4\p\v\9\z\9\w\x\q\p\5\l\c\l\a\g\h\y\2\u\h\7\f\a\x\i\u\9\5\b\8\p\3\1\1\p\t\u\g\b\5\j\3\a\4\7\6\3\h\0\r\1\x\6\6\v\o\f\v\t\z\u\l\g\b\m\c\w\t\s\r\t\8\5\h\6\p\t\k\p\0\m\7\i\v\9\c\t\z\9\i\v\h\h\y\i\j\h\b\s\n\i\k\u\7\b\5\8\e\7\9\i\o\i\u\o\h\1\4\5\n\y\g\r\d\h\w\v\u\o\b\z\m\v\5\o\u\j\b\s\q\7\7\s\c\p\7\0\6\v\2\r\r\1\m\n\9\t\h\l\i\0\c\8\r\j\4\c\4\j\z\0\l\r\c\0\4\c\e\q\e\3\u\p\g\x\k\4\f\j\e\b\3\f\l\e\d\9\0\f\0\n\i\u\i\k\j\s\5\1\d\i\5\3\z\x\f\1\4\m\v\z\1\6\b\a\i\3\r\9 ]] 00:40:43.002 09:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:43.002 09:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:43.002 [2024-12-09 09:52:50.325181] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:43.002 [2024-12-09 09:52:50.325300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60490 ] 00:40:43.002 [2024-12-09 09:52:50.483099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.261 [2024-12-09 09:52:50.523636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:43.261 [2024-12-09 09:52:50.559508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:43.261  [2024-12-09T09:52:51.021Z] Copying: 512/512 [B] (average 500 kBps) 00:40:43.519 00:40:43.519 09:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 44toepawlox300y85sahkeigkav7x3ogusyjr0w0dpwvp1l1guv999w1tl36n1zu4vij9xhp9xjabwekoxftjvel4az4ljlt8hc9aqn7rvx1m9p0byg8rla7i63bwx8rqp79jae633s9d8cvc7pkdo4724s5ride8dh3eaztjqtlkl1x1tlod02escbmzw131imlohs31v1ke02fs8xrbiqk9d786lwz4wfvav3m9y2qtg00ast84u9mpb15fr3kpzvl73mkkanlifi5lagli14cui4pv9z9wxqp5lclaghy2uh7faxiu95b8p311ptugb5j3a4763h0r1x66vofvtzulgbmcwtsrt85h6ptkp0m7iv9ctz9ivhhyijhbsniku7b58e79ioiuoh145nygrdhwvuobzmv5oujbsq77scp706v2rr1mn9thli0c8rj4c4jz0lrc04ceqe3upgxk4fjeb3fled90f0niuikjs51di53zxf14mvz16bai3r9 == 
\4\4\t\o\e\p\a\w\l\o\x\3\0\0\y\8\5\s\a\h\k\e\i\g\k\a\v\7\x\3\o\g\u\s\y\j\r\0\w\0\d\p\w\v\p\1\l\1\g\u\v\9\9\9\w\1\t\l\3\6\n\1\z\u\4\v\i\j\9\x\h\p\9\x\j\a\b\w\e\k\o\x\f\t\j\v\e\l\4\a\z\4\l\j\l\t\8\h\c\9\a\q\n\7\r\v\x\1\m\9\p\0\b\y\g\8\r\l\a\7\i\6\3\b\w\x\8\r\q\p\7\9\j\a\e\6\3\3\s\9\d\8\c\v\c\7\p\k\d\o\4\7\2\4\s\5\r\i\d\e\8\d\h\3\e\a\z\t\j\q\t\l\k\l\1\x\1\t\l\o\d\0\2\e\s\c\b\m\z\w\1\3\1\i\m\l\o\h\s\3\1\v\1\k\e\0\2\f\s\8\x\r\b\i\q\k\9\d\7\8\6\l\w\z\4\w\f\v\a\v\3\m\9\y\2\q\t\g\0\0\a\s\t\8\4\u\9\m\p\b\1\5\f\r\3\k\p\z\v\l\7\3\m\k\k\a\n\l\i\f\i\5\l\a\g\l\i\1\4\c\u\i\4\p\v\9\z\9\w\x\q\p\5\l\c\l\a\g\h\y\2\u\h\7\f\a\x\i\u\9\5\b\8\p\3\1\1\p\t\u\g\b\5\j\3\a\4\7\6\3\h\0\r\1\x\6\6\v\o\f\v\t\z\u\l\g\b\m\c\w\t\s\r\t\8\5\h\6\p\t\k\p\0\m\7\i\v\9\c\t\z\9\i\v\h\h\y\i\j\h\b\s\n\i\k\u\7\b\5\8\e\7\9\i\o\i\u\o\h\1\4\5\n\y\g\r\d\h\w\v\u\o\b\z\m\v\5\o\u\j\b\s\q\7\7\s\c\p\7\0\6\v\2\r\r\1\m\n\9\t\h\l\i\0\c\8\r\j\4\c\4\j\z\0\l\r\c\0\4\c\e\q\e\3\u\p\g\x\k\4\f\j\e\b\3\f\l\e\d\9\0\f\0\n\i\u\i\k\j\s\5\1\d\i\5\3\z\x\f\1\4\m\v\z\1\6\b\a\i\3\r\9 ]] 00:40:43.519 09:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:43.519 09:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:43.519 [2024-12-09 09:52:50.867774] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:43.519 [2024-12-09 09:52:50.867871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60498 ] 00:40:43.780 [2024-12-09 09:52:51.023856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.780 [2024-12-09 09:52:51.064918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:43.780 [2024-12-09 09:52:51.099509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:43.780  [2024-12-09T09:52:51.541Z] Copying: 512/512 [B] (average 100 kBps) 00:40:44.039 00:40:44.039 09:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 44toepawlox300y85sahkeigkav7x3ogusyjr0w0dpwvp1l1guv999w1tl36n1zu4vij9xhp9xjabwekoxftjvel4az4ljlt8hc9aqn7rvx1m9p0byg8rla7i63bwx8rqp79jae633s9d8cvc7pkdo4724s5ride8dh3eaztjqtlkl1x1tlod02escbmzw131imlohs31v1ke02fs8xrbiqk9d786lwz4wfvav3m9y2qtg00ast84u9mpb15fr3kpzvl73mkkanlifi5lagli14cui4pv9z9wxqp5lclaghy2uh7faxiu95b8p311ptugb5j3a4763h0r1x66vofvtzulgbmcwtsrt85h6ptkp0m7iv9ctz9ivhhyijhbsniku7b58e79ioiuoh145nygrdhwvuobzmv5oujbsq77scp706v2rr1mn9thli0c8rj4c4jz0lrc04ceqe3upgxk4fjeb3fled90f0niuikjs51di53zxf14mvz16bai3r9 == 
\4\4\t\o\e\p\a\w\l\o\x\3\0\0\y\8\5\s\a\h\k\e\i\g\k\a\v\7\x\3\o\g\u\s\y\j\r\0\w\0\d\p\w\v\p\1\l\1\g\u\v\9\9\9\w\1\t\l\3\6\n\1\z\u\4\v\i\j\9\x\h\p\9\x\j\a\b\w\e\k\o\x\f\t\j\v\e\l\4\a\z\4\l\j\l\t\8\h\c\9\a\q\n\7\r\v\x\1\m\9\p\0\b\y\g\8\r\l\a\7\i\6\3\b\w\x\8\r\q\p\7\9\j\a\e\6\3\3\s\9\d\8\c\v\c\7\p\k\d\o\4\7\2\4\s\5\r\i\d\e\8\d\h\3\e\a\z\t\j\q\t\l\k\l\1\x\1\t\l\o\d\0\2\e\s\c\b\m\z\w\1\3\1\i\m\l\o\h\s\3\1\v\1\k\e\0\2\f\s\8\x\r\b\i\q\k\9\d\7\8\6\l\w\z\4\w\f\v\a\v\3\m\9\y\2\q\t\g\0\0\a\s\t\8\4\u\9\m\p\b\1\5\f\r\3\k\p\z\v\l\7\3\m\k\k\a\n\l\i\f\i\5\l\a\g\l\i\1\4\c\u\i\4\p\v\9\z\9\w\x\q\p\5\l\c\l\a\g\h\y\2\u\h\7\f\a\x\i\u\9\5\b\8\p\3\1\1\p\t\u\g\b\5\j\3\a\4\7\6\3\h\0\r\1\x\6\6\v\o\f\v\t\z\u\l\g\b\m\c\w\t\s\r\t\8\5\h\6\p\t\k\p\0\m\7\i\v\9\c\t\z\9\i\v\h\h\y\i\j\h\b\s\n\i\k\u\7\b\5\8\e\7\9\i\o\i\u\o\h\1\4\5\n\y\g\r\d\h\w\v\u\o\b\z\m\v\5\o\u\j\b\s\q\7\7\s\c\p\7\0\6\v\2\r\r\1\m\n\9\t\h\l\i\0\c\8\r\j\4\c\4\j\z\0\l\r\c\0\4\c\e\q\e\3\u\p\g\x\k\4\f\j\e\b\3\f\l\e\d\9\0\f\0\n\i\u\i\k\j\s\5\1\d\i\5\3\z\x\f\1\4\m\v\z\1\6\b\a\i\3\r\9 ]] 00:40:44.039 09:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:44.039 09:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:40:44.039 [2024-12-09 09:52:51.386732] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:44.039 [2024-12-09 09:52:51.386832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60505 ] 00:40:44.039 [2024-12-09 09:52:51.535347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:44.297 [2024-12-09 09:52:51.575334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.297 [2024-12-09 09:52:51.609771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:44.297  [2024-12-09T09:52:52.059Z] Copying: 512/512 [B] (average 500 kBps) 00:40:44.557 00:40:44.557 09:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 44toepawlox300y85sahkeigkav7x3ogusyjr0w0dpwvp1l1guv999w1tl36n1zu4vij9xhp9xjabwekoxftjvel4az4ljlt8hc9aqn7rvx1m9p0byg8rla7i63bwx8rqp79jae633s9d8cvc7pkdo4724s5ride8dh3eaztjqtlkl1x1tlod02escbmzw131imlohs31v1ke02fs8xrbiqk9d786lwz4wfvav3m9y2qtg00ast84u9mpb15fr3kpzvl73mkkanlifi5lagli14cui4pv9z9wxqp5lclaghy2uh7faxiu95b8p311ptugb5j3a4763h0r1x66vofvtzulgbmcwtsrt85h6ptkp0m7iv9ctz9ivhhyijhbsniku7b58e79ioiuoh145nygrdhwvuobzmv5oujbsq77scp706v2rr1mn9thli0c8rj4c4jz0lrc04ceqe3upgxk4fjeb3fled90f0niuikjs51di53zxf14mvz16bai3r9 == 
\4\4\t\o\e\p\a\w\l\o\x\3\0\0\y\8\5\s\a\h\k\e\i\g\k\a\v\7\x\3\o\g\u\s\y\j\r\0\w\0\d\p\w\v\p\1\l\1\g\u\v\9\9\9\w\1\t\l\3\6\n\1\z\u\4\v\i\j\9\x\h\p\9\x\j\a\b\w\e\k\o\x\f\t\j\v\e\l\4\a\z\4\l\j\l\t\8\h\c\9\a\q\n\7\r\v\x\1\m\9\p\0\b\y\g\8\r\l\a\7\i\6\3\b\w\x\8\r\q\p\7\9\j\a\e\6\3\3\s\9\d\8\c\v\c\7\p\k\d\o\4\7\2\4\s\5\r\i\d\e\8\d\h\3\e\a\z\t\j\q\t\l\k\l\1\x\1\t\l\o\d\0\2\e\s\c\b\m\z\w\1\3\1\i\m\l\o\h\s\3\1\v\1\k\e\0\2\f\s\8\x\r\b\i\q\k\9\d\7\8\6\l\w\z\4\w\f\v\a\v\3\m\9\y\2\q\t\g\0\0\a\s\t\8\4\u\9\m\p\b\1\5\f\r\3\k\p\z\v\l\7\3\m\k\k\a\n\l\i\f\i\5\l\a\g\l\i\1\4\c\u\i\4\p\v\9\z\9\w\x\q\p\5\l\c\l\a\g\h\y\2\u\h\7\f\a\x\i\u\9\5\b\8\p\3\1\1\p\t\u\g\b\5\j\3\a\4\7\6\3\h\0\r\1\x\6\6\v\o\f\v\t\z\u\l\g\b\m\c\w\t\s\r\t\8\5\h\6\p\t\k\p\0\m\7\i\v\9\c\t\z\9\i\v\h\h\y\i\j\h\b\s\n\i\k\u\7\b\5\8\e\7\9\i\o\i\u\o\h\1\4\5\n\y\g\r\d\h\w\v\u\o\b\z\m\v\5\o\u\j\b\s\q\7\7\s\c\p\7\0\6\v\2\r\r\1\m\n\9\t\h\l\i\0\c\8\r\j\4\c\4\j\z\0\l\r\c\0\4\c\e\q\e\3\u\p\g\x\k\4\f\j\e\b\3\f\l\e\d\9\0\f\0\n\i\u\i\k\j\s\5\1\d\i\5\3\z\x\f\1\4\m\v\z\1\6\b\a\i\3\r\9 ]] 00:40:44.557 09:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:44.557 09:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:40:44.557 09:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:44.557 09:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:44.557 09:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:44.557 09:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:44.557 [2024-12-09 09:52:51.914290] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
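The dd_flags_misc_forced_aio block that this output belongs to walks a small flag matrix; paraphrased from the flags_ro/flags_rw arrays and for-loops in the xtrace (spdk_dd again stands for the full binary path, and gen_bytes is the payload helper from the dd test suite):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)        # i.e. direct nonblock sync dsync
    for flag_ro in "${flags_ro[@]}"; do
      # gen_bytes 512 produces a fresh 512-byte payload for dd.dump0 before each inner loop
      for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                --of=dd.dump1 --oflag="$flag_rw"
        # each [[ ... == \...\... ]] line in the log then compares dd.dump1's
        # contents against that payload after the copy
      done
    done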
00:40:44.557 [2024-12-09 09:52:51.914399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60513 ] 00:40:44.816 [2024-12-09 09:52:52.107745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:44.816 [2024-12-09 09:52:52.147848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.816 [2024-12-09 09:52:52.180068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:44.816  [2024-12-09T09:52:52.577Z] Copying: 512/512 [B] (average 500 kBps) 00:40:45.075 00:40:45.075 09:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ g87wtorf0v19suu9pfdnq8gfc6c64mhf0qc27hxol6re7ig7lxhjsag02qawk2photl9k6wo9z9vbzcj5unb3ktm1dvmmg8oyx53spk8uh1n1pzx52fjlyiqhgnn71iqsueml8dyqpgkxqsw8st4hd4qbnino3gym9lxh1r32c401pddqbtrryc9jdz2318ytjjf4s64d8yyvyaq1jo9h0os3vwtiww3lvza7h9spai8s7p3bjrmqk8bzd6tqv96ckiog08xvd5amdjoetzfvpgdvwtatce5v1ksyv1a8jw3e9oduphw5rtjp0u7pity5lixlxi37ayjdy04yvj6cy4x1y0xl8lpunewkvudpo91gp5a1v6zsgwxn6q1uhlypo5lpnf6s6gre13lddyswqelj0goxwjgiwjdguj06mfravl9q51f8uia5ts34xncayaaneyyjzel9b9uubk36q6plk8t6o8knsp7ivuly0p6x14hqmyqiidzjwcfl11h == \g\8\7\w\t\o\r\f\0\v\1\9\s\u\u\9\p\f\d\n\q\8\g\f\c\6\c\6\4\m\h\f\0\q\c\2\7\h\x\o\l\6\r\e\7\i\g\7\l\x\h\j\s\a\g\0\2\q\a\w\k\2\p\h\o\t\l\9\k\6\w\o\9\z\9\v\b\z\c\j\5\u\n\b\3\k\t\m\1\d\v\m\m\g\8\o\y\x\5\3\s\p\k\8\u\h\1\n\1\p\z\x\5\2\f\j\l\y\i\q\h\g\n\n\7\1\i\q\s\u\e\m\l\8\d\y\q\p\g\k\x\q\s\w\8\s\t\4\h\d\4\q\b\n\i\n\o\3\g\y\m\9\l\x\h\1\r\3\2\c\4\0\1\p\d\d\q\b\t\r\r\y\c\9\j\d\z\2\3\1\8\y\t\j\j\f\4\s\6\4\d\8\y\y\v\y\a\q\1\j\o\9\h\0\o\s\3\v\w\t\i\w\w\3\l\v\z\a\7\h\9\s\p\a\i\8\s\7\p\3\b\j\r\m\q\k\8\b\z\d\6\t\q\v\9\6\c\k\i\o\g\0\8\x\v\d\5\a\m\d\j\o\e\t\z\f\v\p\g\d\v\w\t\a\t\c\e\5\v\1\k\s\y\v\1\a\8\j\w\3\e\9\o\d\u\p\h\w\5\r\t\j\p\0\u\7\p\i\t\y\5\l\i\x\l\x\i\3\7\a\y\j\d\y\0\4\y\v\j\6\c\y\4\x\1\y\0\x\l\8\l\p\u\n\e\w\k\v\u\d\p\o\9\1\g\p\5\a\1\v\6\z\s\g\w\x\n\6\q\1\u\h\l\y\p\o\5\l\p\n\f\6\s\6\g\r\e\1\3\l\d\d\y\s\w\q\e\l\j\0\g\o\x\w\j\g\i\w\j\d\g\u\j\0\6\m\f\r\a\v\l\9\q\5\1\f\8\u\i\a\5\t\s\3\4\x\n\c\a\y\a\a\n\e\y\y\j\z\e\l\9\b\9\u\u\b\k\3\6\q\6\p\l\k\8\t\6\o\8\k\n\s\p\7\i\v\u\l\y\0\p\6\x\1\4\h\q\m\y\q\i\i\d\z\j\w\c\f\l\1\1\h ]] 00:40:45.075 09:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:45.075 09:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:45.075 [2024-12-09 09:52:52.464663] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:45.075 [2024-12-09 09:52:52.464769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60520 ] 00:40:45.333 [2024-12-09 09:52:52.622612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.333 [2024-12-09 09:52:52.661801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.333 [2024-12-09 09:52:52.694486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:45.333  [2024-12-09T09:52:53.093Z] Copying: 512/512 [B] (average 500 kBps) 00:40:45.591 00:40:45.592 09:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ g87wtorf0v19suu9pfdnq8gfc6c64mhf0qc27hxol6re7ig7lxhjsag02qawk2photl9k6wo9z9vbzcj5unb3ktm1dvmmg8oyx53spk8uh1n1pzx52fjlyiqhgnn71iqsueml8dyqpgkxqsw8st4hd4qbnino3gym9lxh1r32c401pddqbtrryc9jdz2318ytjjf4s64d8yyvyaq1jo9h0os3vwtiww3lvza7h9spai8s7p3bjrmqk8bzd6tqv96ckiog08xvd5amdjoetzfvpgdvwtatce5v1ksyv1a8jw3e9oduphw5rtjp0u7pity5lixlxi37ayjdy04yvj6cy4x1y0xl8lpunewkvudpo91gp5a1v6zsgwxn6q1uhlypo5lpnf6s6gre13lddyswqelj0goxwjgiwjdguj06mfravl9q51f8uia5ts34xncayaaneyyjzel9b9uubk36q6plk8t6o8knsp7ivuly0p6x14hqmyqiidzjwcfl11h == \g\8\7\w\t\o\r\f\0\v\1\9\s\u\u\9\p\f\d\n\q\8\g\f\c\6\c\6\4\m\h\f\0\q\c\2\7\h\x\o\l\6\r\e\7\i\g\7\l\x\h\j\s\a\g\0\2\q\a\w\k\2\p\h\o\t\l\9\k\6\w\o\9\z\9\v\b\z\c\j\5\u\n\b\3\k\t\m\1\d\v\m\m\g\8\o\y\x\5\3\s\p\k\8\u\h\1\n\1\p\z\x\5\2\f\j\l\y\i\q\h\g\n\n\7\1\i\q\s\u\e\m\l\8\d\y\q\p\g\k\x\q\s\w\8\s\t\4\h\d\4\q\b\n\i\n\o\3\g\y\m\9\l\x\h\1\r\3\2\c\4\0\1\p\d\d\q\b\t\r\r\y\c\9\j\d\z\2\3\1\8\y\t\j\j\f\4\s\6\4\d\8\y\y\v\y\a\q\1\j\o\9\h\0\o\s\3\v\w\t\i\w\w\3\l\v\z\a\7\h\9\s\p\a\i\8\s\7\p\3\b\j\r\m\q\k\8\b\z\d\6\t\q\v\9\6\c\k\i\o\g\0\8\x\v\d\5\a\m\d\j\o\e\t\z\f\v\p\g\d\v\w\t\a\t\c\e\5\v\1\k\s\y\v\1\a\8\j\w\3\e\9\o\d\u\p\h\w\5\r\t\j\p\0\u\7\p\i\t\y\5\l\i\x\l\x\i\3\7\a\y\j\d\y\0\4\y\v\j\6\c\y\4\x\1\y\0\x\l\8\l\p\u\n\e\w\k\v\u\d\p\o\9\1\g\p\5\a\1\v\6\z\s\g\w\x\n\6\q\1\u\h\l\y\p\o\5\l\p\n\f\6\s\6\g\r\e\1\3\l\d\d\y\s\w\q\e\l\j\0\g\o\x\w\j\g\i\w\j\d\g\u\j\0\6\m\f\r\a\v\l\9\q\5\1\f\8\u\i\a\5\t\s\3\4\x\n\c\a\y\a\a\n\e\y\y\j\z\e\l\9\b\9\u\u\b\k\3\6\q\6\p\l\k\8\t\6\o\8\k\n\s\p\7\i\v\u\l\y\0\p\6\x\1\4\h\q\m\y\q\i\i\d\z\j\w\c\f\l\1\1\h ]] 00:40:45.592 09:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:45.592 09:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:45.592 [2024-12-09 09:52:52.992493] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:45.592 [2024-12-09 09:52:52.992607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60532 ] 00:40:45.850 [2024-12-09 09:52:53.146745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.850 [2024-12-09 09:52:53.186978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.850 [2024-12-09 09:52:53.220396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:45.850  [2024-12-09T09:52:53.610Z] Copying: 512/512 [B] (average 500 kBps) 00:40:46.108 00:40:46.108 09:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ g87wtorf0v19suu9pfdnq8gfc6c64mhf0qc27hxol6re7ig7lxhjsag02qawk2photl9k6wo9z9vbzcj5unb3ktm1dvmmg8oyx53spk8uh1n1pzx52fjlyiqhgnn71iqsueml8dyqpgkxqsw8st4hd4qbnino3gym9lxh1r32c401pddqbtrryc9jdz2318ytjjf4s64d8yyvyaq1jo9h0os3vwtiww3lvza7h9spai8s7p3bjrmqk8bzd6tqv96ckiog08xvd5amdjoetzfvpgdvwtatce5v1ksyv1a8jw3e9oduphw5rtjp0u7pity5lixlxi37ayjdy04yvj6cy4x1y0xl8lpunewkvudpo91gp5a1v6zsgwxn6q1uhlypo5lpnf6s6gre13lddyswqelj0goxwjgiwjdguj06mfravl9q51f8uia5ts34xncayaaneyyjzel9b9uubk36q6plk8t6o8knsp7ivuly0p6x14hqmyqiidzjwcfl11h == \g\8\7\w\t\o\r\f\0\v\1\9\s\u\u\9\p\f\d\n\q\8\g\f\c\6\c\6\4\m\h\f\0\q\c\2\7\h\x\o\l\6\r\e\7\i\g\7\l\x\h\j\s\a\g\0\2\q\a\w\k\2\p\h\o\t\l\9\k\6\w\o\9\z\9\v\b\z\c\j\5\u\n\b\3\k\t\m\1\d\v\m\m\g\8\o\y\x\5\3\s\p\k\8\u\h\1\n\1\p\z\x\5\2\f\j\l\y\i\q\h\g\n\n\7\1\i\q\s\u\e\m\l\8\d\y\q\p\g\k\x\q\s\w\8\s\t\4\h\d\4\q\b\n\i\n\o\3\g\y\m\9\l\x\h\1\r\3\2\c\4\0\1\p\d\d\q\b\t\r\r\y\c\9\j\d\z\2\3\1\8\y\t\j\j\f\4\s\6\4\d\8\y\y\v\y\a\q\1\j\o\9\h\0\o\s\3\v\w\t\i\w\w\3\l\v\z\a\7\h\9\s\p\a\i\8\s\7\p\3\b\j\r\m\q\k\8\b\z\d\6\t\q\v\9\6\c\k\i\o\g\0\8\x\v\d\5\a\m\d\j\o\e\t\z\f\v\p\g\d\v\w\t\a\t\c\e\5\v\1\k\s\y\v\1\a\8\j\w\3\e\9\o\d\u\p\h\w\5\r\t\j\p\0\u\7\p\i\t\y\5\l\i\x\l\x\i\3\7\a\y\j\d\y\0\4\y\v\j\6\c\y\4\x\1\y\0\x\l\8\l\p\u\n\e\w\k\v\u\d\p\o\9\1\g\p\5\a\1\v\6\z\s\g\w\x\n\6\q\1\u\h\l\y\p\o\5\l\p\n\f\6\s\6\g\r\e\1\3\l\d\d\y\s\w\q\e\l\j\0\g\o\x\w\j\g\i\w\j\d\g\u\j\0\6\m\f\r\a\v\l\9\q\5\1\f\8\u\i\a\5\t\s\3\4\x\n\c\a\y\a\a\n\e\y\y\j\z\e\l\9\b\9\u\u\b\k\3\6\q\6\p\l\k\8\t\6\o\8\k\n\s\p\7\i\v\u\l\y\0\p\6\x\1\4\h\q\m\y\q\i\i\d\z\j\w\c\f\l\1\1\h ]] 00:40:46.108 09:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:46.108 09:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:40:46.108 [2024-12-09 09:52:53.499511] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:46.108 [2024-12-09 09:52:53.499626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60535 ] 00:40:46.367 [2024-12-09 09:52:53.650653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:46.367 [2024-12-09 09:52:53.684540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.367 [2024-12-09 09:52:53.714293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:46.367  [2024-12-09T09:52:54.128Z] Copying: 512/512 [B] (average 500 kBps) 00:40:46.626 00:40:46.626 09:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ g87wtorf0v19suu9pfdnq8gfc6c64mhf0qc27hxol6re7ig7lxhjsag02qawk2photl9k6wo9z9vbzcj5unb3ktm1dvmmg8oyx53spk8uh1n1pzx52fjlyiqhgnn71iqsueml8dyqpgkxqsw8st4hd4qbnino3gym9lxh1r32c401pddqbtrryc9jdz2318ytjjf4s64d8yyvyaq1jo9h0os3vwtiww3lvza7h9spai8s7p3bjrmqk8bzd6tqv96ckiog08xvd5amdjoetzfvpgdvwtatce5v1ksyv1a8jw3e9oduphw5rtjp0u7pity5lixlxi37ayjdy04yvj6cy4x1y0xl8lpunewkvudpo91gp5a1v6zsgwxn6q1uhlypo5lpnf6s6gre13lddyswqelj0goxwjgiwjdguj06mfravl9q51f8uia5ts34xncayaaneyyjzel9b9uubk36q6plk8t6o8knsp7ivuly0p6x14hqmyqiidzjwcfl11h == \g\8\7\w\t\o\r\f\0\v\1\9\s\u\u\9\p\f\d\n\q\8\g\f\c\6\c\6\4\m\h\f\0\q\c\2\7\h\x\o\l\6\r\e\7\i\g\7\l\x\h\j\s\a\g\0\2\q\a\w\k\2\p\h\o\t\l\9\k\6\w\o\9\z\9\v\b\z\c\j\5\u\n\b\3\k\t\m\1\d\v\m\m\g\8\o\y\x\5\3\s\p\k\8\u\h\1\n\1\p\z\x\5\2\f\j\l\y\i\q\h\g\n\n\7\1\i\q\s\u\e\m\l\8\d\y\q\p\g\k\x\q\s\w\8\s\t\4\h\d\4\q\b\n\i\n\o\3\g\y\m\9\l\x\h\1\r\3\2\c\4\0\1\p\d\d\q\b\t\r\r\y\c\9\j\d\z\2\3\1\8\y\t\j\j\f\4\s\6\4\d\8\y\y\v\y\a\q\1\j\o\9\h\0\o\s\3\v\w\t\i\w\w\3\l\v\z\a\7\h\9\s\p\a\i\8\s\7\p\3\b\j\r\m\q\k\8\b\z\d\6\t\q\v\9\6\c\k\i\o\g\0\8\x\v\d\5\a\m\d\j\o\e\t\z\f\v\p\g\d\v\w\t\a\t\c\e\5\v\1\k\s\y\v\1\a\8\j\w\3\e\9\o\d\u\p\h\w\5\r\t\j\p\0\u\7\p\i\t\y\5\l\i\x\l\x\i\3\7\a\y\j\d\y\0\4\y\v\j\6\c\y\4\x\1\y\0\x\l\8\l\p\u\n\e\w\k\v\u\d\p\o\9\1\g\p\5\a\1\v\6\z\s\g\w\x\n\6\q\1\u\h\l\y\p\o\5\l\p\n\f\6\s\6\g\r\e\1\3\l\d\d\y\s\w\q\e\l\j\0\g\o\x\w\j\g\i\w\j\d\g\u\j\0\6\m\f\r\a\v\l\9\q\5\1\f\8\u\i\a\5\t\s\3\4\x\n\c\a\y\a\a\n\e\y\y\j\z\e\l\9\b\9\u\u\b\k\3\6\q\6\p\l\k\8\t\6\o\8\k\n\s\p\7\i\v\u\l\y\0\p\6\x\1\4\h\q\m\y\q\i\i\d\z\j\w\c\f\l\1\1\h ]] 00:40:46.626 00:40:46.626 real 0m4.236s 00:40:46.626 user 0m2.450s 00:40:46.626 sys 0m0.788s 00:40:46.626 ************************************ 00:40:46.626 END TEST dd_flags_misc_forced_aio 00:40:46.626 ************************************ 00:40:46.626 09:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:46.626 09:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:46.626 09:52:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:40:46.626 09:52:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:46.626 09:52:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:46.626 ************************************ 00:40:46.626 END TEST spdk_dd_posix 00:40:46.626 ************************************ 00:40:46.626 00:40:46.626 real 0m18.770s 00:40:46.626 user 0m9.474s 00:40:46.626 sys 0m4.640s 00:40:46.626 09:52:53 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:40:46.626 09:52:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:46.626 09:52:54 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:40:46.626 09:52:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:46.626 09:52:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:46.626 09:52:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:40:46.626 ************************************ 00:40:46.626 START TEST spdk_dd_malloc 00:40:46.626 ************************************ 00:40:46.626 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:40:46.626 * Looking for test storage... 00:40:46.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:46.626 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:46.626 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:46.626 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:46.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.885 --rc genhtml_branch_coverage=1 00:40:46.885 --rc genhtml_function_coverage=1 00:40:46.885 --rc genhtml_legend=1 00:40:46.885 --rc geninfo_all_blocks=1 00:40:46.885 --rc geninfo_unexecuted_blocks=1 00:40:46.885 00:40:46.885 ' 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:46.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.885 --rc genhtml_branch_coverage=1 00:40:46.885 --rc genhtml_function_coverage=1 00:40:46.885 --rc genhtml_legend=1 00:40:46.885 --rc geninfo_all_blocks=1 00:40:46.885 --rc geninfo_unexecuted_blocks=1 00:40:46.885 00:40:46.885 ' 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:46.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.885 --rc genhtml_branch_coverage=1 00:40:46.885 --rc genhtml_function_coverage=1 00:40:46.885 --rc genhtml_legend=1 00:40:46.885 --rc geninfo_all_blocks=1 00:40:46.885 --rc geninfo_unexecuted_blocks=1 00:40:46.885 00:40:46.885 ' 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:46.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.885 --rc genhtml_branch_coverage=1 00:40:46.885 --rc genhtml_function_coverage=1 00:40:46.885 --rc genhtml_legend=1 00:40:46.885 --rc geninfo_all_blocks=1 00:40:46.885 --rc geninfo_unexecuted_blocks=1 00:40:46.885 00:40:46.885 ' 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:46.885 09:52:54 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:46.885 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:40:46.886 ************************************ 00:40:46.886 START TEST dd_malloc_copy 00:40:46.886 ************************************ 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
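The dd_malloc_copy setup here declares two RAM-backed bdevs (malloc0 above, malloc1 just below) of 1048576 blocks x 512 bytes, i.e. 512 MiB apiece, which matches the 512/512 [MB] progress lines further down. Condensed, the two timed copies it drives look like this (the log passes the gen_conf JSON via --json /dev/fd/62; <(gen_conf) is an equivalent illustrative spelling):

    spdk_dd --ib=malloc0 --ob=malloc1 --json <(gen_conf)   # forward copy, timed
    spdk_dd --ib=malloc1 --ob=malloc0 --json <(gen_conf)   # and back again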
00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:40:46.886 09:52:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:40:46.886 [2024-12-09 09:52:54.285495] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:40:46.886 [2024-12-09 09:52:54.285583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60619 ] 00:40:46.886 { 00:40:46.886 "subsystems": [ 00:40:46.886 { 00:40:46.886 "subsystem": "bdev", 00:40:46.886 "config": [ 00:40:46.886 { 00:40:46.886 "params": { 00:40:46.886 "block_size": 512, 00:40:46.886 "num_blocks": 1048576, 00:40:46.886 "name": "malloc0" 00:40:46.886 }, 00:40:46.886 "method": "bdev_malloc_create" 00:40:46.886 }, 00:40:46.886 { 00:40:46.886 "params": { 00:40:46.886 "block_size": 512, 00:40:46.886 "num_blocks": 1048576, 00:40:46.886 "name": "malloc1" 00:40:46.886 }, 00:40:46.886 "method": "bdev_malloc_create" 00:40:46.886 }, 00:40:46.886 { 00:40:46.886 "method": "bdev_wait_for_examine" 00:40:46.886 } 00:40:46.886 ] 00:40:46.886 } 00:40:46.886 ] 00:40:46.886 } 00:40:47.145 [2024-12-09 09:52:54.437929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:47.145 [2024-12-09 09:52:54.471725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:47.145 [2024-12-09 09:52:54.501530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:48.257  [2024-12-09T09:52:57.136Z] Copying: 187/512 [MB] (187 MBps) [2024-12-09T09:52:57.703Z] Copying: 377/512 [MB] (190 MBps) [2024-12-09T09:52:57.961Z] Copying: 512/512 [MB] (average 188 MBps) 00:40:50.459 00:40:50.459 09:52:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:40:50.459 09:52:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:40:50.459 09:52:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:40:50.459 09:52:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:40:50.459 [2024-12-09 09:52:57.851486] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:50.459 [2024-12-09 09:52:57.851579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60667 ] 00:40:50.459 { 00:40:50.459 "subsystems": [ 00:40:50.459 { 00:40:50.459 "subsystem": "bdev", 00:40:50.459 "config": [ 00:40:50.459 { 00:40:50.459 "params": { 00:40:50.459 "block_size": 512, 00:40:50.459 "num_blocks": 1048576, 00:40:50.459 "name": "malloc0" 00:40:50.459 }, 00:40:50.459 "method": "bdev_malloc_create" 00:40:50.459 }, 00:40:50.459 { 00:40:50.459 "params": { 00:40:50.459 "block_size": 512, 00:40:50.459 "num_blocks": 1048576, 00:40:50.459 "name": "malloc1" 00:40:50.459 }, 00:40:50.459 "method": "bdev_malloc_create" 00:40:50.459 }, 00:40:50.459 { 00:40:50.459 "method": "bdev_wait_for_examine" 00:40:50.459 } 00:40:50.459 ] 00:40:50.459 } 00:40:50.459 ] 00:40:50.459 } 00:40:50.718 [2024-12-09 09:52:58.006131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.718 [2024-12-09 09:52:58.044524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.718 [2024-12-09 09:52:58.077979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:52.093  [2024-12-09T09:53:00.528Z] Copying: 189/512 [MB] (189 MBps) [2024-12-09T09:53:01.094Z] Copying: 378/512 [MB] (189 MBps) [2024-12-09T09:53:01.661Z] Copying: 512/512 [MB] (average 189 MBps) 00:40:54.159 00:40:54.159 00:40:54.159 real 0m7.140s 00:40:54.159 user 0m6.465s 00:40:54.159 sys 0m0.509s 00:40:54.159 09:53:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:54.159 09:53:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:40:54.159 ************************************ 00:40:54.159 END TEST dd_malloc_copy 00:40:54.159 ************************************ 00:40:54.159 00:40:54.159 real 0m7.368s 00:40:54.159 user 0m6.594s 00:40:54.159 sys 0m0.612s 00:40:54.159 09:53:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:54.159 09:53:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:40:54.159 ************************************ 00:40:54.159 END TEST spdk_dd_malloc 00:40:54.159 ************************************ 00:40:54.159 09:53:01 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:40:54.159 09:53:01 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:54.159 09:53:01 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:54.159 09:53:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:40:54.159 ************************************ 00:40:54.159 START TEST spdk_dd_bdev_to_bdev 00:40:54.159 ************************************ 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:40:54.159 * Looking for test storage... 
00:40:54.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:54.159 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:54.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.160 --rc genhtml_branch_coverage=1 00:40:54.160 --rc genhtml_function_coverage=1 00:40:54.160 --rc genhtml_legend=1 00:40:54.160 --rc geninfo_all_blocks=1 00:40:54.160 --rc geninfo_unexecuted_blocks=1 00:40:54.160 00:40:54.160 ' 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:54.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.160 --rc genhtml_branch_coverage=1 00:40:54.160 --rc genhtml_function_coverage=1 00:40:54.160 --rc genhtml_legend=1 00:40:54.160 --rc geninfo_all_blocks=1 00:40:54.160 --rc geninfo_unexecuted_blocks=1 00:40:54.160 00:40:54.160 ' 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:54.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.160 --rc genhtml_branch_coverage=1 00:40:54.160 --rc genhtml_function_coverage=1 00:40:54.160 --rc genhtml_legend=1 00:40:54.160 --rc geninfo_all_blocks=1 00:40:54.160 --rc geninfo_unexecuted_blocks=1 00:40:54.160 00:40:54.160 ' 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:54.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.160 --rc genhtml_branch_coverage=1 00:40:54.160 --rc genhtml_function_coverage=1 00:40:54.160 --rc genhtml_legend=1 00:40:54.160 --rc geninfo_all_blocks=1 00:40:54.160 --rc geninfo_unexecuted_blocks=1 00:40:54.160 00:40:54.160 ' 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:54.160 09:53:01 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:40:54.160 ************************************ 00:40:54.160 START TEST dd_inflate_file 00:40:54.160 ************************************ 00:40:54.160 09:53:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:40:54.419 [2024-12-09 09:53:01.692804] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:54.419 [2024-12-09 09:53:01.692900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60784 ] 00:40:54.419 [2024-12-09 09:53:01.846484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:54.419 [2024-12-09 09:53:01.885621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.419 [2024-12-09 09:53:01.918455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:54.678  [2024-12-09T09:53:02.180Z] Copying: 64/64 [MB] (average 1729 MBps) 00:40:54.678 00:40:54.678 00:40:54.678 real 0m0.527s 00:40:54.678 user 0m0.321s 00:40:54.678 sys 0m0.224s 00:40:54.678 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:54.678 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:40:54.678 ************************************ 00:40:54.678 END TEST dd_inflate_file 00:40:54.678 ************************************ 00:40:54.936 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:40:54.936 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:40:54.936 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:40:54.936 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:40:54.936 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:40:54.936 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:40:54.936 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:40:54.936 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:54.936 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:40:54.936 ************************************ 00:40:54.936 START TEST dd_copy_to_out_bdev 00:40:54.936 ************************************ 00:40:54.936 09:53:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:40:54.936 { 00:40:54.936 "subsystems": [ 00:40:54.936 { 00:40:54.936 "subsystem": "bdev", 00:40:54.936 "config": [ 00:40:54.936 { 00:40:54.936 "params": { 00:40:54.936 "trtype": "pcie", 00:40:54.936 "traddr": "0000:00:10.0", 00:40:54.936 "name": "Nvme0" 00:40:54.936 }, 00:40:54.936 "method": "bdev_nvme_attach_controller" 00:40:54.936 }, 00:40:54.936 { 00:40:54.936 "params": { 00:40:54.936 "trtype": "pcie", 00:40:54.936 "traddr": "0000:00:11.0", 00:40:54.936 "name": "Nvme1" 00:40:54.936 }, 00:40:54.936 "method": "bdev_nvme_attach_controller" 00:40:54.936 }, 00:40:54.936 { 00:40:54.936 "method": "bdev_wait_for_examine" 00:40:54.936 } 00:40:54.936 ] 00:40:54.936 } 00:40:54.936 ] 00:40:54.936 } 00:40:54.936 [2024-12-09 09:53:02.278855] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:54.936 [2024-12-09 09:53:02.278949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60813 ] 00:40:54.936 [2024-12-09 09:53:02.432081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:55.194 [2024-12-09 09:53:02.469149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:55.194 [2024-12-09 09:53:02.502293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:56.568  [2024-12-09T09:53:04.070Z] Copying: 63/64 [MB] (63 MBps) [2024-12-09T09:53:04.070Z] Copying: 64/64 [MB] (average 63 MBps) 00:40:56.568 00:40:56.568 00:40:56.568 real 0m1.667s 00:40:56.568 user 0m1.475s 00:40:56.568 sys 0m1.269s 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:40:56.568 ************************************ 00:40:56.568 END TEST dd_copy_to_out_bdev 00:40:56.568 ************************************ 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:40:56.568 ************************************ 00:40:56.568 START TEST dd_offset_magic 00:40:56.568 ************************************ 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:40:56.568 09:53:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:40:56.568 [2024-12-09 09:53:03.989614] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:56.568 [2024-12-09 09:53:03.989695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60858 ] 00:40:56.568 { 00:40:56.568 "subsystems": [ 00:40:56.568 { 00:40:56.568 "subsystem": "bdev", 00:40:56.568 "config": [ 00:40:56.568 { 00:40:56.568 "params": { 00:40:56.568 "trtype": "pcie", 00:40:56.568 "traddr": "0000:00:10.0", 00:40:56.568 "name": "Nvme0" 00:40:56.568 }, 00:40:56.568 "method": "bdev_nvme_attach_controller" 00:40:56.568 }, 00:40:56.568 { 00:40:56.568 "params": { 00:40:56.568 "trtype": "pcie", 00:40:56.568 "traddr": "0000:00:11.0", 00:40:56.568 "name": "Nvme1" 00:40:56.568 }, 00:40:56.568 "method": "bdev_nvme_attach_controller" 00:40:56.568 }, 00:40:56.568 { 00:40:56.568 "method": "bdev_wait_for_examine" 00:40:56.568 } 00:40:56.568 ] 00:40:56.568 } 00:40:56.568 ] 00:40:56.568 } 00:40:56.827 [2024-12-09 09:53:04.143106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:56.827 [2024-12-09 09:53:04.182160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.827 [2024-12-09 09:53:04.215851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:57.085  [2024-12-09T09:53:04.844Z] Copying: 65/65 [MB] (average 1120 MBps) 00:40:57.342 00:40:57.342 09:53:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:40:57.342 09:53:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:40:57.343 09:53:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:40:57.343 09:53:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:40:57.343 { 00:40:57.343 "subsystems": [ 00:40:57.343 { 00:40:57.343 "subsystem": "bdev", 00:40:57.343 "config": [ 00:40:57.343 { 00:40:57.343 "params": { 00:40:57.343 "trtype": "pcie", 00:40:57.343 "traddr": "0000:00:10.0", 00:40:57.343 "name": "Nvme0" 00:40:57.343 }, 00:40:57.343 "method": "bdev_nvme_attach_controller" 00:40:57.343 }, 00:40:57.343 { 00:40:57.343 "params": { 00:40:57.343 "trtype": "pcie", 00:40:57.343 "traddr": "0000:00:11.0", 00:40:57.343 "name": "Nvme1" 00:40:57.343 }, 00:40:57.343 "method": "bdev_nvme_attach_controller" 00:40:57.343 }, 00:40:57.343 { 00:40:57.343 "method": "bdev_wait_for_examine" 00:40:57.343 } 00:40:57.343 ] 00:40:57.343 } 00:40:57.343 ] 00:40:57.343 } 00:40:57.343 [2024-12-09 09:53:04.718176] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:57.343 [2024-12-09 09:53:04.718269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60872 ] 00:40:57.601 [2024-12-09 09:53:04.870986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:57.601 [2024-12-09 09:53:04.910093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:57.601 [2024-12-09 09:53:04.943352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:57.858  [2024-12-09T09:53:05.360Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:40:57.858 00:40:57.858 09:53:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:40:57.858 09:53:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:40:57.858 09:53:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:40:57.858 09:53:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:40:57.858 09:53:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:40:57.858 09:53:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:40:57.858 09:53:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:40:58.116 [2024-12-09 09:53:05.360764] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:58.116 [2024-12-09 09:53:05.360869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60889 ] 00:40:58.116 { 00:40:58.116 "subsystems": [ 00:40:58.116 { 00:40:58.116 "subsystem": "bdev", 00:40:58.116 "config": [ 00:40:58.116 { 00:40:58.116 "params": { 00:40:58.116 "trtype": "pcie", 00:40:58.116 "traddr": "0000:00:10.0", 00:40:58.116 "name": "Nvme0" 00:40:58.116 }, 00:40:58.116 "method": "bdev_nvme_attach_controller" 00:40:58.116 }, 00:40:58.116 { 00:40:58.116 "params": { 00:40:58.116 "trtype": "pcie", 00:40:58.116 "traddr": "0000:00:11.0", 00:40:58.116 "name": "Nvme1" 00:40:58.116 }, 00:40:58.116 "method": "bdev_nvme_attach_controller" 00:40:58.116 }, 00:40:58.116 { 00:40:58.116 "method": "bdev_wait_for_examine" 00:40:58.116 } 00:40:58.116 ] 00:40:58.116 } 00:40:58.116 ] 00:40:58.116 } 00:40:58.116 [2024-12-09 09:53:05.513176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:58.116 [2024-12-09 09:53:05.546298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.116 [2024-12-09 09:53:05.577426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:58.374  [2024-12-09T09:53:06.135Z] Copying: 65/65 [MB] (average 1181 MBps) 00:40:58.633 00:40:58.633 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:40:58.633 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:40:58.633 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:40:58.633 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:40:58.633 [2024-12-09 09:53:06.082670] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:58.633 [2024-12-09 09:53:06.082843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60909 ] 00:40:58.633 { 00:40:58.633 "subsystems": [ 00:40:58.633 { 00:40:58.633 "subsystem": "bdev", 00:40:58.633 "config": [ 00:40:58.633 { 00:40:58.633 "params": { 00:40:58.633 "trtype": "pcie", 00:40:58.633 "traddr": "0000:00:10.0", 00:40:58.633 "name": "Nvme0" 00:40:58.633 }, 00:40:58.633 "method": "bdev_nvme_attach_controller" 00:40:58.633 }, 00:40:58.633 { 00:40:58.633 "params": { 00:40:58.633 "trtype": "pcie", 00:40:58.633 "traddr": "0000:00:11.0", 00:40:58.633 "name": "Nvme1" 00:40:58.633 }, 00:40:58.633 "method": "bdev_nvme_attach_controller" 00:40:58.633 }, 00:40:58.633 { 00:40:58.633 "method": "bdev_wait_for_examine" 00:40:58.633 } 00:40:58.633 ] 00:40:58.633 } 00:40:58.633 ] 00:40:58.633 } 00:40:58.890 [2024-12-09 09:53:06.251420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:58.890 [2024-12-09 09:53:06.290887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.890 [2024-12-09 09:53:06.326619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:59.147  [2024-12-09T09:53:06.907Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:59.405 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:40:59.405 00:40:59.405 real 0m2.748s 00:40:59.405 user 0m2.057s 00:40:59.405 sys 0m0.672s 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:59.405 ************************************ 00:40:59.405 END TEST dd_offset_magic 00:40:59.405 ************************************ 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:40:59.405 09:53:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:40:59.405 [2024-12-09 09:53:06.779347] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:59.406 [2024-12-09 09:53:06.779437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60940 ] 00:40:59.406 { 00:40:59.406 "subsystems": [ 00:40:59.406 { 00:40:59.406 "subsystem": "bdev", 00:40:59.406 "config": [ 00:40:59.406 { 00:40:59.406 "params": { 00:40:59.406 "trtype": "pcie", 00:40:59.406 "traddr": "0000:00:10.0", 00:40:59.406 "name": "Nvme0" 00:40:59.406 }, 00:40:59.406 "method": "bdev_nvme_attach_controller" 00:40:59.406 }, 00:40:59.406 { 00:40:59.406 "params": { 00:40:59.406 "trtype": "pcie", 00:40:59.406 "traddr": "0000:00:11.0", 00:40:59.406 "name": "Nvme1" 00:40:59.406 }, 00:40:59.406 "method": "bdev_nvme_attach_controller" 00:40:59.406 }, 00:40:59.406 { 00:40:59.406 "method": "bdev_wait_for_examine" 00:40:59.406 } 00:40:59.406 ] 00:40:59.406 } 00:40:59.406 ] 00:40:59.406 } 00:40:59.664 [2024-12-09 09:53:06.931367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.664 [2024-12-09 09:53:06.964853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:59.664 [2024-12-09 09:53:06.996092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:59.664  [2024-12-09T09:53:07.424Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:40:59.922 00:40:59.922 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:40:59.922 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:40:59.922 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:40:59.922 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:40:59.922 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:40:59.922 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:40:59.922 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:40:59.922 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:40:59.922 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:40:59.922 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:40:59.922 [2024-12-09 09:53:07.408777] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:40:59.922 [2024-12-09 09:53:07.409090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60956 ] 00:40:59.922 { 00:40:59.922 "subsystems": [ 00:40:59.922 { 00:40:59.922 "subsystem": "bdev", 00:40:59.922 "config": [ 00:40:59.922 { 00:40:59.922 "params": { 00:40:59.922 "trtype": "pcie", 00:40:59.922 "traddr": "0000:00:10.0", 00:40:59.922 "name": "Nvme0" 00:40:59.922 }, 00:40:59.922 "method": "bdev_nvme_attach_controller" 00:40:59.922 }, 00:40:59.922 { 00:40:59.922 "params": { 00:40:59.922 "trtype": "pcie", 00:40:59.922 "traddr": "0000:00:11.0", 00:40:59.922 "name": "Nvme1" 00:40:59.922 }, 00:40:59.922 "method": "bdev_nvme_attach_controller" 00:40:59.922 }, 00:40:59.922 { 00:40:59.922 "method": "bdev_wait_for_examine" 00:40:59.922 } 00:40:59.922 ] 00:40:59.922 } 00:40:59.922 ] 00:40:59.922 } 00:41:00.180 [2024-12-09 09:53:07.563217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:00.180 [2024-12-09 09:53:07.602649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:00.180 [2024-12-09 09:53:07.636496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:00.438  [2024-12-09T09:53:08.199Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:41:00.697 00:41:00.697 09:53:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:41:00.697 ************************************ 00:41:00.697 END TEST spdk_dd_bdev_to_bdev 00:41:00.697 ************************************ 00:41:00.697 00:41:00.697 real 0m6.561s 00:41:00.697 user 0m4.954s 00:41:00.697 sys 0m2.733s 00:41:00.697 09:53:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:00.697 09:53:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:00.697 09:53:08 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:41:00.697 09:53:08 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:41:00.697 09:53:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:00.697 09:53:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:00.697 09:53:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:00.697 ************************************ 00:41:00.697 START TEST spdk_dd_uring 00:41:00.697 ************************************ 00:41:00.697 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:41:00.697 * Looking for test storage... 
00:41:00.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:00.697 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:00.697 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:41:00.697 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:00.955 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:00.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.956 --rc genhtml_branch_coverage=1 00:41:00.956 --rc genhtml_function_coverage=1 00:41:00.956 --rc genhtml_legend=1 00:41:00.956 --rc geninfo_all_blocks=1 00:41:00.956 --rc geninfo_unexecuted_blocks=1 00:41:00.956 00:41:00.956 ' 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:00.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.956 --rc genhtml_branch_coverage=1 00:41:00.956 --rc genhtml_function_coverage=1 00:41:00.956 --rc genhtml_legend=1 00:41:00.956 --rc geninfo_all_blocks=1 00:41:00.956 --rc geninfo_unexecuted_blocks=1 00:41:00.956 00:41:00.956 ' 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:00.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.956 --rc genhtml_branch_coverage=1 00:41:00.956 --rc genhtml_function_coverage=1 00:41:00.956 --rc genhtml_legend=1 00:41:00.956 --rc geninfo_all_blocks=1 00:41:00.956 --rc geninfo_unexecuted_blocks=1 00:41:00.956 00:41:00.956 ' 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:00.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.956 --rc genhtml_branch_coverage=1 00:41:00.956 --rc genhtml_function_coverage=1 00:41:00.956 --rc genhtml_legend=1 00:41:00.956 --rc geninfo_all_blocks=1 00:41:00.956 --rc geninfo_unexecuted_blocks=1 00:41:00.956 00:41:00.956 ' 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:41:00.956 ************************************ 00:41:00.956 START TEST dd_uring_copy 00:41:00.956 ************************************ 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:41:00.956 
09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=c93zbjbhipfzt3o6yoe5gw48y2e3g77vpmlqayd0juqj7vomcrugkjaq6d8slc4xechay24y675uinutch30uw27uq4s4rumbtaxm41gb44aa3qw3ola76mhcqlmssyks4fvrxto6ksf6i5m42zs45rbwj1n89ie1g1s1mc2mzsjdggkxgva1u8gw8noi0ydl5n2477uxst11f34avgb58k98zef3hq35kxop8thvbgpq5c6e595y83o7xtzzxb9xa0gqtcsfq0vv7s1ckc9vwvgi8ld3txs9ab4mx962uaerciarl4nbbi7lyqc9ic2dixx41zba2dcbqfjibllq2d3a00v973hq7rbwued348pyfhd32z6d3c89adotz3n1n10w8t65ypowwix44j11d6gwk9xajbqpnrcqf8sfj8c8wrylclusd3wlohgvlin69vebsf5h9m19r9bx14duizk3ty5ez0y1pkdnn7nyfyvsfjiwhyrtli0j2hfhvcmyfgdx9eah0n2ndhf7ayevd09oxpseqbuvo09h1io68j37jprxdpegdgkkxtna31c5qja01p1cpje1bfdafiw4dfxqt49r9xpltj6puhrnyljsplkrkdkcsu5xmmjsuma7yoyf6u1wlkp2gnijo4w9zdf68p6mk5mmgdksyg4t1a3196h885ex3rqyhxixalw5zhvm2bmyqqiqc5ckponpbto6truk1kkajuthpgogyhhw2chfdfadl5fy3iae0mtg3nwvagitu8ewllueyqqusdirf1hs3txd8nz9t2yioweeiapsrtkptxtekgtwf7vnwt6plo654ao0l1nitq7x3k5t5832r69g3imv3f06dzidh5kvrtnlcc39hwsmmofwpruc3falsz4n5lidrdluxszcxo4jn0zv5kjg5n5z17n28cwq6klv8vnf0nugjrw2dkx9o9myqqndft51r0nkyq2nf1kyq8u0la3ywv8kxfqdu1m2n4ylqs263k2fyqv 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
c93zbjbhipfzt3o6yoe5gw48y2e3g77vpmlqayd0juqj7vomcrugkjaq6d8slc4xechay24y675uinutch30uw27uq4s4rumbtaxm41gb44aa3qw3ola76mhcqlmssyks4fvrxto6ksf6i5m42zs45rbwj1n89ie1g1s1mc2mzsjdggkxgva1u8gw8noi0ydl5n2477uxst11f34avgb58k98zef3hq35kxop8thvbgpq5c6e595y83o7xtzzxb9xa0gqtcsfq0vv7s1ckc9vwvgi8ld3txs9ab4mx962uaerciarl4nbbi7lyqc9ic2dixx41zba2dcbqfjibllq2d3a00v973hq7rbwued348pyfhd32z6d3c89adotz3n1n10w8t65ypowwix44j11d6gwk9xajbqpnrcqf8sfj8c8wrylclusd3wlohgvlin69vebsf5h9m19r9bx14duizk3ty5ez0y1pkdnn7nyfyvsfjiwhyrtli0j2hfhvcmyfgdx9eah0n2ndhf7ayevd09oxpseqbuvo09h1io68j37jprxdpegdgkkxtna31c5qja01p1cpje1bfdafiw4dfxqt49r9xpltj6puhrnyljsplkrkdkcsu5xmmjsuma7yoyf6u1wlkp2gnijo4w9zdf68p6mk5mmgdksyg4t1a3196h885ex3rqyhxixalw5zhvm2bmyqqiqc5ckponpbto6truk1kkajuthpgogyhhw2chfdfadl5fy3iae0mtg3nwvagitu8ewllueyqqusdirf1hs3txd8nz9t2yioweeiapsrtkptxtekgtwf7vnwt6plo654ao0l1nitq7x3k5t5832r69g3imv3f06dzidh5kvrtnlcc39hwsmmofwpruc3falsz4n5lidrdluxszcxo4jn0zv5kjg5n5z17n28cwq6klv8vnf0nugjrw2dkx9o9myqqndft51r0nkyq2nf1kyq8u0la3ywv8kxfqdu1m2n4ylqs263k2fyqv 00:41:00.956 09:53:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:41:00.956 [2024-12-09 09:53:08.350361] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:41:00.956 [2024-12-09 09:53:08.350641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61035 ] 00:41:01.214 [2024-12-09 09:53:08.511244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:01.214 [2024-12-09 09:53:08.553919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:01.214 [2024-12-09 09:53:08.588740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:01.780  [2024-12-09T09:53:09.540Z] Copying: 511/511 [MB] (average 1620 MBps) 00:41:02.038 00:41:02.038 09:53:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:41:02.038 09:53:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:41:02.038 09:53:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:41:02.038 09:53:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:41:02.038 { 00:41:02.038 "subsystems": [ 00:41:02.038 { 00:41:02.038 "subsystem": "bdev", 00:41:02.038 "config": [ 00:41:02.038 { 00:41:02.038 "params": { 00:41:02.038 "block_size": 512, 00:41:02.038 "num_blocks": 1048576, 00:41:02.038 "name": "malloc0" 00:41:02.038 }, 00:41:02.038 "method": "bdev_malloc_create" 00:41:02.038 }, 00:41:02.038 { 00:41:02.038 "params": { 00:41:02.038 "filename": "/dev/zram1", 00:41:02.038 "name": "uring0" 00:41:02.038 }, 00:41:02.038 "method": "bdev_uring_create" 00:41:02.038 }, 00:41:02.038 { 00:41:02.038 "method": "bdev_wait_for_examine" 00:41:02.038 } 00:41:02.038 ] 00:41:02.038 } 00:41:02.038 ] 00:41:02.038 } 00:41:02.038 [2024-12-09 09:53:09.426079] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:41:02.038 [2024-12-09 09:53:09.426182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61052 ] 00:41:02.296 [2024-12-09 09:53:09.586452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:02.296 [2024-12-09 09:53:09.621033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:02.296 [2024-12-09 09:53:09.653292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:03.668  [2024-12-09T09:53:12.104Z] Copying: 210/512 [MB] (210 MBps) [2024-12-09T09:53:12.362Z] Copying: 415/512 [MB] (205 MBps) [2024-12-09T09:53:12.624Z] Copying: 512/512 [MB] (average 209 MBps) 00:41:05.122 00:41:05.122 09:53:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:41:05.122 09:53:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:41:05.122 09:53:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:41:05.122 09:53:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:41:05.122 [2024-12-09 09:53:12.567811] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:41:05.122 [2024-12-09 09:53:12.568157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61096 ] 00:41:05.122 { 00:41:05.122 "subsystems": [ 00:41:05.122 { 00:41:05.122 "subsystem": "bdev", 00:41:05.122 "config": [ 00:41:05.122 { 00:41:05.122 "params": { 00:41:05.122 "block_size": 512, 00:41:05.122 "num_blocks": 1048576, 00:41:05.122 "name": "malloc0" 00:41:05.122 }, 00:41:05.122 "method": "bdev_malloc_create" 00:41:05.122 }, 00:41:05.122 { 00:41:05.122 "params": { 00:41:05.122 "filename": "/dev/zram1", 00:41:05.122 "name": "uring0" 00:41:05.122 }, 00:41:05.122 "method": "bdev_uring_create" 00:41:05.122 }, 00:41:05.122 { 00:41:05.122 "method": "bdev_wait_for_examine" 00:41:05.122 } 00:41:05.122 ] 00:41:05.122 } 00:41:05.122 ] 00:41:05.122 } 00:41:05.382 [2024-12-09 09:53:12.725562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:05.382 [2024-12-09 09:53:12.766208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:05.382 [2024-12-09 09:53:12.799434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:06.756  [2024-12-09T09:53:15.191Z] Copying: 164/512 [MB] (164 MBps) [2024-12-09T09:53:16.126Z] Copying: 324/512 [MB] (160 MBps) [2024-12-09T09:53:16.384Z] Copying: 480/512 [MB] (155 MBps) [2024-12-09T09:53:16.642Z] Copying: 512/512 [MB] (average 157 MBps) 00:41:09.140 00:41:09.140 09:53:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:41:09.141 09:53:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 
c93zbjbhipfzt3o6yoe5gw48y2e3g77vpmlqayd0juqj7vomcrugkjaq6d8slc4xechay24y675uinutch30uw27uq4s4rumbtaxm41gb44aa3qw3ola76mhcqlmssyks4fvrxto6ksf6i5m42zs45rbwj1n89ie1g1s1mc2mzsjdggkxgva1u8gw8noi0ydl5n2477uxst11f34avgb58k98zef3hq35kxop8thvbgpq5c6e595y83o7xtzzxb9xa0gqtcsfq0vv7s1ckc9vwvgi8ld3txs9ab4mx962uaerciarl4nbbi7lyqc9ic2dixx41zba2dcbqfjibllq2d3a00v973hq7rbwued348pyfhd32z6d3c89adotz3n1n10w8t65ypowwix44j11d6gwk9xajbqpnrcqf8sfj8c8wrylclusd3wlohgvlin69vebsf5h9m19r9bx14duizk3ty5ez0y1pkdnn7nyfyvsfjiwhyrtli0j2hfhvcmyfgdx9eah0n2ndhf7ayevd09oxpseqbuvo09h1io68j37jprxdpegdgkkxtna31c5qja01p1cpje1bfdafiw4dfxqt49r9xpltj6puhrnyljsplkrkdkcsu5xmmjsuma7yoyf6u1wlkp2gnijo4w9zdf68p6mk5mmgdksyg4t1a3196h885ex3rqyhxixalw5zhvm2bmyqqiqc5ckponpbto6truk1kkajuthpgogyhhw2chfdfadl5fy3iae0mtg3nwvagitu8ewllueyqqusdirf1hs3txd8nz9t2yioweeiapsrtkptxtekgtwf7vnwt6plo654ao0l1nitq7x3k5t5832r69g3imv3f06dzidh5kvrtnlcc39hwsmmofwpruc3falsz4n5lidrdluxszcxo4jn0zv5kjg5n5z17n28cwq6klv8vnf0nugjrw2dkx9o9myqqndft51r0nkyq2nf1kyq8u0la3ywv8kxfqdu1m2n4ylqs263k2fyqv == \c\9\3\z\b\j\b\h\i\p\f\z\t\3\o\6\y\o\e\5\g\w\4\8\y\2\e\3\g\7\7\v\p\m\l\q\a\y\d\0\j\u\q\j\7\v\o\m\c\r\u\g\k\j\a\q\6\d\8\s\l\c\4\x\e\c\h\a\y\2\4\y\6\7\5\u\i\n\u\t\c\h\3\0\u\w\2\7\u\q\4\s\4\r\u\m\b\t\a\x\m\4\1\g\b\4\4\a\a\3\q\w\3\o\l\a\7\6\m\h\c\q\l\m\s\s\y\k\s\4\f\v\r\x\t\o\6\k\s\f\6\i\5\m\4\2\z\s\4\5\r\b\w\j\1\n\8\9\i\e\1\g\1\s\1\m\c\2\m\z\s\j\d\g\g\k\x\g\v\a\1\u\8\g\w\8\n\o\i\0\y\d\l\5\n\2\4\7\7\u\x\s\t\1\1\f\3\4\a\v\g\b\5\8\k\9\8\z\e\f\3\h\q\3\5\k\x\o\p\8\t\h\v\b\g\p\q\5\c\6\e\5\9\5\y\8\3\o\7\x\t\z\z\x\b\9\x\a\0\g\q\t\c\s\f\q\0\v\v\7\s\1\c\k\c\9\v\w\v\g\i\8\l\d\3\t\x\s\9\a\b\4\m\x\9\6\2\u\a\e\r\c\i\a\r\l\4\n\b\b\i\7\l\y\q\c\9\i\c\2\d\i\x\x\4\1\z\b\a\2\d\c\b\q\f\j\i\b\l\l\q\2\d\3\a\0\0\v\9\7\3\h\q\7\r\b\w\u\e\d\3\4\8\p\y\f\h\d\3\2\z\6\d\3\c\8\9\a\d\o\t\z\3\n\1\n\1\0\w\8\t\6\5\y\p\o\w\w\i\x\4\4\j\1\1\d\6\g\w\k\9\x\a\j\b\q\p\n\r\c\q\f\8\s\f\j\8\c\8\w\r\y\l\c\l\u\s\d\3\w\l\o\h\g\v\l\i\n\6\9\v\e\b\s\f\5\h\9\m\1\9\r\9\b\x\1\4\d\u\i\z\k\3\t\y\5\e\z\0\y\1\p\k\d\n\n\7\n\y\f\y\v\s\f\j\i\w\h\y\r\t\l\i\0\j\2\h\f\h\v\c\m\y\f\g\d\x\9\e\a\h\0\n\2\n\d\h\f\7\a\y\e\v\d\0\9\o\x\p\s\e\q\b\u\v\o\0\9\h\1\i\o\6\8\j\3\7\j\p\r\x\d\p\e\g\d\g\k\k\x\t\n\a\3\1\c\5\q\j\a\0\1\p\1\c\p\j\e\1\b\f\d\a\f\i\w\4\d\f\x\q\t\4\9\r\9\x\p\l\t\j\6\p\u\h\r\n\y\l\j\s\p\l\k\r\k\d\k\c\s\u\5\x\m\m\j\s\u\m\a\7\y\o\y\f\6\u\1\w\l\k\p\2\g\n\i\j\o\4\w\9\z\d\f\6\8\p\6\m\k\5\m\m\g\d\k\s\y\g\4\t\1\a\3\1\9\6\h\8\8\5\e\x\3\r\q\y\h\x\i\x\a\l\w\5\z\h\v\m\2\b\m\y\q\q\i\q\c\5\c\k\p\o\n\p\b\t\o\6\t\r\u\k\1\k\k\a\j\u\t\h\p\g\o\g\y\h\h\w\2\c\h\f\d\f\a\d\l\5\f\y\3\i\a\e\0\m\t\g\3\n\w\v\a\g\i\t\u\8\e\w\l\l\u\e\y\q\q\u\s\d\i\r\f\1\h\s\3\t\x\d\8\n\z\9\t\2\y\i\o\w\e\e\i\a\p\s\r\t\k\p\t\x\t\e\k\g\t\w\f\7\v\n\w\t\6\p\l\o\6\5\4\a\o\0\l\1\n\i\t\q\7\x\3\k\5\t\5\8\3\2\r\6\9\g\3\i\m\v\3\f\0\6\d\z\i\d\h\5\k\v\r\t\n\l\c\c\3\9\h\w\s\m\m\o\f\w\p\r\u\c\3\f\a\l\s\z\4\n\5\l\i\d\r\d\l\u\x\s\z\c\x\o\4\j\n\0\z\v\5\k\j\g\5\n\5\z\1\7\n\2\8\c\w\q\6\k\l\v\8\v\n\f\0\n\u\g\j\r\w\2\d\k\x\9\o\9\m\y\q\q\n\d\f\t\5\1\r\0\n\k\y\q\2\n\f\1\k\y\q\8\u\0\l\a\3\y\w\v\8\k\x\f\q\d\u\1\m\2\n\4\y\l\q\s\2\6\3\k\2\f\y\q\v ]] 00:41:09.141 09:53:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:41:09.141 09:53:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 
c93zbjbhipfzt3o6yoe5gw48y2e3g77vpmlqayd0juqj7vomcrugkjaq6d8slc4xechay24y675uinutch30uw27uq4s4rumbtaxm41gb44aa3qw3ola76mhcqlmssyks4fvrxto6ksf6i5m42zs45rbwj1n89ie1g1s1mc2mzsjdggkxgva1u8gw8noi0ydl5n2477uxst11f34avgb58k98zef3hq35kxop8thvbgpq5c6e595y83o7xtzzxb9xa0gqtcsfq0vv7s1ckc9vwvgi8ld3txs9ab4mx962uaerciarl4nbbi7lyqc9ic2dixx41zba2dcbqfjibllq2d3a00v973hq7rbwued348pyfhd32z6d3c89adotz3n1n10w8t65ypowwix44j11d6gwk9xajbqpnrcqf8sfj8c8wrylclusd3wlohgvlin69vebsf5h9m19r9bx14duizk3ty5ez0y1pkdnn7nyfyvsfjiwhyrtli0j2hfhvcmyfgdx9eah0n2ndhf7ayevd09oxpseqbuvo09h1io68j37jprxdpegdgkkxtna31c5qja01p1cpje1bfdafiw4dfxqt49r9xpltj6puhrnyljsplkrkdkcsu5xmmjsuma7yoyf6u1wlkp2gnijo4w9zdf68p6mk5mmgdksyg4t1a3196h885ex3rqyhxixalw5zhvm2bmyqqiqc5ckponpbto6truk1kkajuthpgogyhhw2chfdfadl5fy3iae0mtg3nwvagitu8ewllueyqqusdirf1hs3txd8nz9t2yioweeiapsrtkptxtekgtwf7vnwt6plo654ao0l1nitq7x3k5t5832r69g3imv3f06dzidh5kvrtnlcc39hwsmmofwpruc3falsz4n5lidrdluxszcxo4jn0zv5kjg5n5z17n28cwq6klv8vnf0nugjrw2dkx9o9myqqndft51r0nkyq2nf1kyq8u0la3ywv8kxfqdu1m2n4ylqs263k2fyqv == \c\9\3\z\b\j\b\h\i\p\f\z\t\3\o\6\y\o\e\5\g\w\4\8\y\2\e\3\g\7\7\v\p\m\l\q\a\y\d\0\j\u\q\j\7\v\o\m\c\r\u\g\k\j\a\q\6\d\8\s\l\c\4\x\e\c\h\a\y\2\4\y\6\7\5\u\i\n\u\t\c\h\3\0\u\w\2\7\u\q\4\s\4\r\u\m\b\t\a\x\m\4\1\g\b\4\4\a\a\3\q\w\3\o\l\a\7\6\m\h\c\q\l\m\s\s\y\k\s\4\f\v\r\x\t\o\6\k\s\f\6\i\5\m\4\2\z\s\4\5\r\b\w\j\1\n\8\9\i\e\1\g\1\s\1\m\c\2\m\z\s\j\d\g\g\k\x\g\v\a\1\u\8\g\w\8\n\o\i\0\y\d\l\5\n\2\4\7\7\u\x\s\t\1\1\f\3\4\a\v\g\b\5\8\k\9\8\z\e\f\3\h\q\3\5\k\x\o\p\8\t\h\v\b\g\p\q\5\c\6\e\5\9\5\y\8\3\o\7\x\t\z\z\x\b\9\x\a\0\g\q\t\c\s\f\q\0\v\v\7\s\1\c\k\c\9\v\w\v\g\i\8\l\d\3\t\x\s\9\a\b\4\m\x\9\6\2\u\a\e\r\c\i\a\r\l\4\n\b\b\i\7\l\y\q\c\9\i\c\2\d\i\x\x\4\1\z\b\a\2\d\c\b\q\f\j\i\b\l\l\q\2\d\3\a\0\0\v\9\7\3\h\q\7\r\b\w\u\e\d\3\4\8\p\y\f\h\d\3\2\z\6\d\3\c\8\9\a\d\o\t\z\3\n\1\n\1\0\w\8\t\6\5\y\p\o\w\w\i\x\4\4\j\1\1\d\6\g\w\k\9\x\a\j\b\q\p\n\r\c\q\f\8\s\f\j\8\c\8\w\r\y\l\c\l\u\s\d\3\w\l\o\h\g\v\l\i\n\6\9\v\e\b\s\f\5\h\9\m\1\9\r\9\b\x\1\4\d\u\i\z\k\3\t\y\5\e\z\0\y\1\p\k\d\n\n\7\n\y\f\y\v\s\f\j\i\w\h\y\r\t\l\i\0\j\2\h\f\h\v\c\m\y\f\g\d\x\9\e\a\h\0\n\2\n\d\h\f\7\a\y\e\v\d\0\9\o\x\p\s\e\q\b\u\v\o\0\9\h\1\i\o\6\8\j\3\7\j\p\r\x\d\p\e\g\d\g\k\k\x\t\n\a\3\1\c\5\q\j\a\0\1\p\1\c\p\j\e\1\b\f\d\a\f\i\w\4\d\f\x\q\t\4\9\r\9\x\p\l\t\j\6\p\u\h\r\n\y\l\j\s\p\l\k\r\k\d\k\c\s\u\5\x\m\m\j\s\u\m\a\7\y\o\y\f\6\u\1\w\l\k\p\2\g\n\i\j\o\4\w\9\z\d\f\6\8\p\6\m\k\5\m\m\g\d\k\s\y\g\4\t\1\a\3\1\9\6\h\8\8\5\e\x\3\r\q\y\h\x\i\x\a\l\w\5\z\h\v\m\2\b\m\y\q\q\i\q\c\5\c\k\p\o\n\p\b\t\o\6\t\r\u\k\1\k\k\a\j\u\t\h\p\g\o\g\y\h\h\w\2\c\h\f\d\f\a\d\l\5\f\y\3\i\a\e\0\m\t\g\3\n\w\v\a\g\i\t\u\8\e\w\l\l\u\e\y\q\q\u\s\d\i\r\f\1\h\s\3\t\x\d\8\n\z\9\t\2\y\i\o\w\e\e\i\a\p\s\r\t\k\p\t\x\t\e\k\g\t\w\f\7\v\n\w\t\6\p\l\o\6\5\4\a\o\0\l\1\n\i\t\q\7\x\3\k\5\t\5\8\3\2\r\6\9\g\3\i\m\v\3\f\0\6\d\z\i\d\h\5\k\v\r\t\n\l\c\c\3\9\h\w\s\m\m\o\f\w\p\r\u\c\3\f\a\l\s\z\4\n\5\l\i\d\r\d\l\u\x\s\z\c\x\o\4\j\n\0\z\v\5\k\j\g\5\n\5\z\1\7\n\2\8\c\w\q\6\k\l\v\8\v\n\f\0\n\u\g\j\r\w\2\d\k\x\9\o\9\m\y\q\q\n\d\f\t\5\1\r\0\n\k\y\q\2\n\f\1\k\y\q\8\u\0\l\a\3\y\w\v\8\k\x\f\q\d\u\1\m\2\n\4\y\l\q\s\2\6\3\k\2\f\y\q\v ]] 00:41:09.141 09:53:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:41:09.399 09:53:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:41:09.399 09:53:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:41:09.399 09:53:16 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:41:09.399 09:53:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:41:09.657 [2024-12-09 09:53:16.909794] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:41:09.657 [2024-12-09 09:53:16.909893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61174 ] 00:41:09.657 { 00:41:09.657 "subsystems": [ 00:41:09.657 { 00:41:09.657 "subsystem": "bdev", 00:41:09.657 "config": [ 00:41:09.657 { 00:41:09.657 "params": { 00:41:09.657 "block_size": 512, 00:41:09.657 "num_blocks": 1048576, 00:41:09.657 "name": "malloc0" 00:41:09.657 }, 00:41:09.657 "method": "bdev_malloc_create" 00:41:09.657 }, 00:41:09.657 { 00:41:09.657 "params": { 00:41:09.657 "filename": "/dev/zram1", 00:41:09.657 "name": "uring0" 00:41:09.657 }, 00:41:09.657 "method": "bdev_uring_create" 00:41:09.657 }, 00:41:09.657 { 00:41:09.657 "method": "bdev_wait_for_examine" 00:41:09.657 } 00:41:09.657 ] 00:41:09.657 } 00:41:09.657 ] 00:41:09.657 } 00:41:09.657 [2024-12-09 09:53:17.057105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:09.657 [2024-12-09 09:53:17.091051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:09.657 [2024-12-09 09:53:17.122224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:11.032  [2024-12-09T09:53:19.469Z] Copying: 138/512 [MB] (138 MBps) [2024-12-09T09:53:20.403Z] Copying: 282/512 [MB] (143 MBps) [2024-12-09T09:53:20.970Z] Copying: 426/512 [MB] (144 MBps) [2024-12-09T09:53:21.229Z] Copying: 512/512 [MB] (average 141 MBps) 00:41:13.727 00:41:13.727 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:41:13.727 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:41:13.727 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:41:13.727 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:41:13.727 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:41:13.727 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:41:13.727 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:41:13.727 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:41:13.727 [2024-12-09 09:53:21.190940] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:41:13.727 [2024-12-09 09:53:21.191367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61236 ] 00:41:13.727 { 00:41:13.727 "subsystems": [ 00:41:13.727 { 00:41:13.727 "subsystem": "bdev", 00:41:13.727 "config": [ 00:41:13.727 { 00:41:13.727 "params": { 00:41:13.727 "block_size": 512, 00:41:13.727 "num_blocks": 1048576, 00:41:13.727 "name": "malloc0" 00:41:13.727 }, 00:41:13.727 "method": "bdev_malloc_create" 00:41:13.727 }, 00:41:13.727 { 00:41:13.727 "params": { 00:41:13.727 "filename": "/dev/zram1", 00:41:13.727 "name": "uring0" 00:41:13.727 }, 00:41:13.727 "method": "bdev_uring_create" 00:41:13.727 }, 00:41:13.727 { 00:41:13.727 "params": { 00:41:13.727 "name": "uring0" 00:41:13.727 }, 00:41:13.727 "method": "bdev_uring_delete" 00:41:13.727 }, 00:41:13.727 { 00:41:13.727 "method": "bdev_wait_for_examine" 00:41:13.727 } 00:41:13.727 ] 00:41:13.727 } 00:41:13.727 ] 00:41:13.727 } 00:41:13.986 [2024-12-09 09:53:21.344776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:13.986 [2024-12-09 09:53:21.383917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:13.986 [2024-12-09 09:53:21.418384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:14.244  [2024-12-09T09:53:22.004Z] Copying: 0/0 [B] (average 0 Bps) 00:41:14.502 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:14.502 09:53:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:14.502 09:53:21 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:41:14.502 [2024-12-09 09:53:21.908572] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:41:14.502 [2024-12-09 09:53:21.908663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61260 ] 00:41:14.502 { 00:41:14.502 "subsystems": [ 00:41:14.502 { 00:41:14.502 "subsystem": "bdev", 00:41:14.502 "config": [ 00:41:14.502 { 00:41:14.502 "params": { 00:41:14.502 "block_size": 512, 00:41:14.502 "num_blocks": 1048576, 00:41:14.502 "name": "malloc0" 00:41:14.502 }, 00:41:14.502 "method": "bdev_malloc_create" 00:41:14.502 }, 00:41:14.502 { 00:41:14.502 "params": { 00:41:14.502 "filename": "/dev/zram1", 00:41:14.502 "name": "uring0" 00:41:14.502 }, 00:41:14.502 "method": "bdev_uring_create" 00:41:14.502 }, 00:41:14.502 { 00:41:14.502 "params": { 00:41:14.502 "name": "uring0" 00:41:14.502 }, 00:41:14.502 "method": "bdev_uring_delete" 00:41:14.502 }, 00:41:14.502 { 00:41:14.502 "method": "bdev_wait_for_examine" 00:41:14.502 } 00:41:14.502 ] 00:41:14.502 } 00:41:14.502 ] 00:41:14.502 } 00:41:14.761 [2024-12-09 09:53:22.052934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:14.761 [2024-12-09 09:53:22.086684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.761 [2024-12-09 09:53:22.118271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:14.761 [2024-12-09 09:53:22.254459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:41:14.761 [2024-12-09 09:53:22.254753] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:41:14.761 [2024-12-09 09:53:22.254776] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:41:14.761 [2024-12-09 09:53:22.254788] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:15.020 [2024-12-09 09:53:22.427187] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:41:15.278 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:41:15.537 00:41:15.537 ************************************ 00:41:15.537 END TEST dd_uring_copy 00:41:15.537 ************************************ 00:41:15.537 real 0m14.529s 00:41:15.537 user 0m9.986s 00:41:15.537 sys 0m12.454s 00:41:15.537 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:15.537 09:53:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:41:15.537 00:41:15.537 real 0m14.783s 00:41:15.537 user 0m10.131s 00:41:15.537 sys 0m12.565s 00:41:15.537 09:53:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:15.537 09:53:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:41:15.537 ************************************ 00:41:15.537 END TEST spdk_dd_uring 00:41:15.537 ************************************ 00:41:15.537 09:53:22 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:41:15.537 09:53:22 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:15.537 09:53:22 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:15.537 09:53:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:15.537 ************************************ 00:41:15.537 START TEST spdk_dd_sparse 00:41:15.537 ************************************ 00:41:15.537 09:53:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:41:15.537 * Looking for test storage... 00:41:15.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:15.537 09:53:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:15.537 09:53:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:41:15.537 09:53:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.797 --rc genhtml_branch_coverage=1 00:41:15.797 --rc genhtml_function_coverage=1 00:41:15.797 --rc genhtml_legend=1 00:41:15.797 --rc geninfo_all_blocks=1 00:41:15.797 --rc geninfo_unexecuted_blocks=1 00:41:15.797 00:41:15.797 ' 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.797 --rc genhtml_branch_coverage=1 00:41:15.797 --rc genhtml_function_coverage=1 00:41:15.797 --rc genhtml_legend=1 00:41:15.797 --rc geninfo_all_blocks=1 00:41:15.797 --rc geninfo_unexecuted_blocks=1 00:41:15.797 00:41:15.797 ' 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.797 --rc genhtml_branch_coverage=1 00:41:15.797 --rc genhtml_function_coverage=1 00:41:15.797 --rc genhtml_legend=1 00:41:15.797 --rc geninfo_all_blocks=1 00:41:15.797 --rc geninfo_unexecuted_blocks=1 00:41:15.797 00:41:15.797 ' 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:15.797 --rc genhtml_branch_coverage=1 00:41:15.797 --rc genhtml_function_coverage=1 00:41:15.797 --rc genhtml_legend=1 00:41:15.797 --rc geninfo_all_blocks=1 00:41:15.797 --rc geninfo_unexecuted_blocks=1 00:41:15.797 00:41:15.797 ' 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:15.797 09:53:23 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:41:15.797 1+0 records in 00:41:15.797 1+0 records out 00:41:15.797 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00589199 s, 712 MB/s 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:41:15.797 1+0 records in 00:41:15.797 1+0 records out 00:41:15.797 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00762664 s, 550 MB/s 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:41:15.797 1+0 records in 00:41:15.797 1+0 records out 00:41:15.797 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0060808 s, 690 MB/s 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:41:15.797 ************************************ 00:41:15.797 START TEST dd_sparse_file_to_file 00:41:15.797 ************************************ 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:41:15.797 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:41:15.798 [2024-12-09 09:53:23.186049] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
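A minimal standalone sketch of the sparse-file preparation pattern exercised just above (the dd/sparse.sh prepare step plus the stat verification that the harness runs later); it is runnable outside the harness, the file names and sizes mirror this run, and the expected values in the comments are taken from the stat output further down in this log:

    # Build a 36 MiB sparse file with three 4 MiB data extents and holes in between,
    # then compare apparent size against actually allocated blocks.
    truncate dd_sparse_aio_disk --size 104857600            # 100 MiB backing file for the aio bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1              # data at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4       # data at 16 MiB (hole 4-16 MiB)
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8       # data at 32 MiB (hole 20-32 MiB)
    stat --printf='apparent size: %s bytes\n' file_zero1     # 37748736 (36 MiB)
    stat --printf='allocated: %b 512-byte blocks\n' file_zero1  # 24576 -> only 12 MiB stored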
00:41:15.798 [2024-12-09 09:53:23.186151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61359 ] 00:41:15.798 { 00:41:15.798 "subsystems": [ 00:41:15.798 { 00:41:15.798 "subsystem": "bdev", 00:41:15.798 "config": [ 00:41:15.798 { 00:41:15.798 "params": { 00:41:15.798 "block_size": 4096, 00:41:15.798 "filename": "dd_sparse_aio_disk", 00:41:15.798 "name": "dd_aio" 00:41:15.798 }, 00:41:15.798 "method": "bdev_aio_create" 00:41:15.798 }, 00:41:15.798 { 00:41:15.798 "params": { 00:41:15.798 "lvs_name": "dd_lvstore", 00:41:15.798 "bdev_name": "dd_aio" 00:41:15.798 }, 00:41:15.798 "method": "bdev_lvol_create_lvstore" 00:41:15.798 }, 00:41:15.798 { 00:41:15.798 "method": "bdev_wait_for_examine" 00:41:15.798 } 00:41:15.798 ] 00:41:15.798 } 00:41:15.798 ] 00:41:15.798 } 00:41:16.056 [2024-12-09 09:53:23.341087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:16.056 [2024-12-09 09:53:23.380003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:16.056 [2024-12-09 09:53:23.413058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:16.056  [2024-12-09T09:53:23.817Z] Copying: 12/36 [MB] (average 923 MBps) 00:41:16.315 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:41:16.315 00:41:16.315 real 0m0.624s 00:41:16.315 user 0m0.405s 00:41:16.315 sys 0m0.263s 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:41:16.315 ************************************ 00:41:16.315 END TEST dd_sparse_file_to_file 00:41:16.315 ************************************ 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:41:16.315 ************************************ 00:41:16.315 START TEST dd_sparse_file_to_bdev 
00:41:16.315 ************************************ 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:41:16.315 09:53:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:16.573 [2024-12-09 09:53:23.852447] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:41:16.573 [2024-12-09 09:53:23.852561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61401 ] 00:41:16.573 { 00:41:16.573 "subsystems": [ 00:41:16.573 { 00:41:16.573 "subsystem": "bdev", 00:41:16.573 "config": [ 00:41:16.573 { 00:41:16.573 "params": { 00:41:16.573 "block_size": 4096, 00:41:16.573 "filename": "dd_sparse_aio_disk", 00:41:16.573 "name": "dd_aio" 00:41:16.573 }, 00:41:16.573 "method": "bdev_aio_create" 00:41:16.573 }, 00:41:16.574 { 00:41:16.574 "params": { 00:41:16.574 "lvs_name": "dd_lvstore", 00:41:16.574 "lvol_name": "dd_lvol", 00:41:16.574 "size_in_mib": 36, 00:41:16.574 "thin_provision": true 00:41:16.574 }, 00:41:16.574 "method": "bdev_lvol_create" 00:41:16.574 }, 00:41:16.574 { 00:41:16.574 "method": "bdev_wait_for_examine" 00:41:16.574 } 00:41:16.574 ] 00:41:16.574 } 00:41:16.574 ] 00:41:16.574 } 00:41:16.574 [2024-12-09 09:53:24.008783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:16.574 [2024-12-09 09:53:24.054595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:16.832 [2024-12-09 09:53:24.097875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:16.832  [2024-12-09T09:53:24.592Z] Copying: 12/36 [MB] (average 600 MBps) 00:41:17.090 00:41:17.090 00:41:17.090 real 0m0.600s 00:41:17.090 user 0m0.413s 00:41:17.090 sys 0m0.250s 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:17.090 ************************************ 00:41:17.090 END TEST dd_sparse_file_to_bdev 00:41:17.090 ************************************ 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:41:17.090 ************************************ 00:41:17.090 START TEST dd_sparse_bdev_to_file 00:41:17.090 ************************************ 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:41:17.090 09:53:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:41:17.090 { 00:41:17.090 "subsystems": [ 00:41:17.090 { 00:41:17.090 "subsystem": "bdev", 00:41:17.090 "config": [ 00:41:17.090 { 00:41:17.090 "params": { 00:41:17.090 "block_size": 4096, 00:41:17.090 "filename": "dd_sparse_aio_disk", 00:41:17.090 "name": "dd_aio" 00:41:17.090 }, 00:41:17.090 "method": "bdev_aio_create" 00:41:17.090 }, 00:41:17.090 { 00:41:17.090 "method": "bdev_wait_for_examine" 00:41:17.090 } 00:41:17.090 ] 00:41:17.090 } 00:41:17.090 ] 00:41:17.090 } 00:41:17.090 [2024-12-09 09:53:24.500857] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
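A minimal sketch, outside the test harness, of how the bdev-to-file copy above is driven: the JSON bdev configuration printed in the log is handed to spdk_dd through --json (the harness passes it on /dev/fd/62; a regular file works the same way). It assumes the dd_sparse_aio_disk backing file and the dd_lvstore/dd_lvol volume created earlier in this run still exist; bdev_to_file.json is just an illustrative file name:

    # Read the lvol bdev back into a sparse host file, as dd/sparse.sh@91 does.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    cat > bdev_to_file.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
              "method": "bdev_aio_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    "$SPDK_DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json bdev_to_file.json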
00:41:17.090 [2024-12-09 09:53:24.500971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61434 ] 00:41:17.348 [2024-12-09 09:53:24.654978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:17.349 [2024-12-09 09:53:24.693943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:17.349 [2024-12-09 09:53:24.727455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:17.349  [2024-12-09T09:53:25.109Z] Copying: 12/36 [MB] (average 1090 MBps) 00:41:17.607 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:41:17.607 00:41:17.607 real 0m0.591s 00:41:17.607 user 0m0.400s 00:41:17.607 sys 0m0.241s 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:17.607 ************************************ 00:41:17.607 END TEST dd_sparse_bdev_to_file 00:41:17.607 ************************************ 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:41:17.607 00:41:17.607 real 0m2.192s 00:41:17.607 user 0m1.399s 00:41:17.607 sys 0m0.951s 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:17.607 09:53:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:41:17.607 ************************************ 00:41:17.607 END TEST spdk_dd_sparse 00:41:17.607 ************************************ 00:41:17.865 09:53:25 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:41:17.865 09:53:25 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:17.865 09:53:25 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:17.865 09:53:25 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:41:17.865 ************************************ 00:41:17.865 START TEST spdk_dd_negative 00:41:17.865 ************************************ 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:41:17.865 * Looking for test storage... 00:41:17.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.865 --rc genhtml_branch_coverage=1 00:41:17.865 --rc genhtml_function_coverage=1 00:41:17.865 --rc genhtml_legend=1 00:41:17.865 --rc geninfo_all_blocks=1 00:41:17.865 --rc geninfo_unexecuted_blocks=1 00:41:17.865 00:41:17.865 ' 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.865 --rc genhtml_branch_coverage=1 00:41:17.865 --rc genhtml_function_coverage=1 00:41:17.865 --rc genhtml_legend=1 00:41:17.865 --rc geninfo_all_blocks=1 00:41:17.865 --rc geninfo_unexecuted_blocks=1 00:41:17.865 00:41:17.865 ' 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.865 --rc genhtml_branch_coverage=1 00:41:17.865 --rc genhtml_function_coverage=1 00:41:17.865 --rc genhtml_legend=1 00:41:17.865 --rc geninfo_all_blocks=1 00:41:17.865 --rc geninfo_unexecuted_blocks=1 00:41:17.865 00:41:17.865 ' 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.865 --rc genhtml_branch_coverage=1 00:41:17.865 --rc genhtml_function_coverage=1 00:41:17.865 --rc genhtml_legend=1 00:41:17.865 --rc geninfo_all_blocks=1 00:41:17.865 --rc geninfo_unexecuted_blocks=1 00:41:17.865 00:41:17.865 ' 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:41:17.865 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:17.866 ************************************ 00:41:17.866 START TEST 
dd_invalid_arguments 00:41:17.866 ************************************ 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:17.866 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:41:18.125 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:41:18.125 00:41:18.125 CPU options: 00:41:18.125 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:41:18.125 (like [0,1,10]) 00:41:18.125 --lcores lcore to CPU mapping list. The list is in the format: 00:41:18.125 [<,lcores[@CPUs]>...] 00:41:18.125 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:41:18.125 Within the group, '-' is used for range separator, 00:41:18.125 ',' is used for single number separator. 00:41:18.125 '( )' can be omitted for single element group, 00:41:18.125 '@' can be omitted if cpus and lcores have the same value 00:41:18.125 --disable-cpumask-locks Disable CPU core lock files. 00:41:18.125 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:41:18.125 pollers in the app support interrupt mode) 00:41:18.125 -p, --main-core main (primary) core for DPDK 00:41:18.125 00:41:18.125 Configuration options: 00:41:18.125 -c, --config, --json JSON config file 00:41:18.125 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:41:18.125 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:41:18.125 --wait-for-rpc wait for RPCs to initialize subsystems 00:41:18.125 --rpcs-allowed comma-separated list of permitted RPCS 00:41:18.125 --json-ignore-init-errors don't exit on invalid config entry 00:41:18.125 00:41:18.125 Memory options: 00:41:18.125 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:41:18.125 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:41:18.125 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:41:18.125 -R, --huge-unlink unlink huge files after initialization 00:41:18.125 -n, --mem-channels number of memory channels used for DPDK 00:41:18.125 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:41:18.125 --msg-mempool-size global message memory pool size in count (default: 262143) 00:41:18.125 --no-huge run without using hugepages 00:41:18.125 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:41:18.125 -i, --shm-id shared memory ID (optional) 00:41:18.125 -g, --single-file-segments force creating just one hugetlbfs file 00:41:18.125 00:41:18.125 PCI options: 00:41:18.125 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:41:18.125 -B, --pci-blocked pci addr to block (can be used more than once) 00:41:18.125 -u, --no-pci disable PCI access 00:41:18.125 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:41:18.125 00:41:18.125 Log options: 00:41:18.125 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:41:18.125 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:41:18.125 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:41:18.125 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:41:18.125 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:41:18.125 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:41:18.125 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:41:18.125 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:41:18.125 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:41:18.125 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:41:18.125 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:41:18.125 --silence-noticelog disable notice level logging to stderr 00:41:18.125 00:41:18.125 Trace options: 00:41:18.125 --num-trace-entries number of trace entries for each core, must be power of 2, 00:41:18.125 setting 0 to disable trace (default 32768) 00:41:18.125 Tracepoints vary in size and can use more than one trace entry. 00:41:18.125 -e, --tpoint-group [:] 00:41:18.125 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:41:18.125 [2024-12-09 09:53:25.390053] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:41:18.125 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:41:18.125 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:41:18.125 bdev_raid, scheduler, all). 00:41:18.125 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:41:18.125 a tracepoint group. First tpoint inside a group can be enabled by 00:41:18.125 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:41:18.125 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:41:18.125 in /include/spdk_internal/trace_defs.h 00:41:18.125 00:41:18.125 Other options: 00:41:18.125 -h, --help show this usage 00:41:18.125 -v, --version print SPDK version 00:41:18.125 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:41:18.125 --env-context Opaque context for use of the env implementation 00:41:18.125 00:41:18.125 Application specific: 00:41:18.125 [--------- DD Options ---------] 00:41:18.125 --if Input file. Must specify either --if or --ib. 00:41:18.125 --ib Input bdev. Must specifier either --if or --ib 00:41:18.125 --of Output file. Must specify either --of or --ob. 00:41:18.125 --ob Output bdev. Must specify either --of or --ob. 00:41:18.126 --iflag Input file flags. 00:41:18.126 --oflag Output file flags. 00:41:18.126 --bs I/O unit size (default: 4096) 00:41:18.126 --qd Queue depth (default: 2) 00:41:18.126 --count I/O unit count. The number of I/O units to copy. (default: all) 00:41:18.126 --skip Skip this many I/O units at start of input. (default: 0) 00:41:18.126 --seek Skip this many I/O units at start of output. (default: 0) 00:41:18.126 --aio Force usage of AIO. (by default io_uring is used if available) 00:41:18.126 --sparse Enable hole skipping in input target 00:41:18.126 Available iflag and oflag values: 00:41:18.126 append - append mode 00:41:18.126 direct - use direct I/O for data 00:41:18.126 directory - fail unless a directory 00:41:18.126 dsync - use synchronized I/O for data 00:41:18.126 noatime - do not update access time 00:41:18.126 noctty - do not assign controlling terminal from file 00:41:18.126 nofollow - do not follow symlinks 00:41:18.126 nonblock - use non-blocking I/O 00:41:18.126 sync - use synchronized I/O for data and metadata 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:18.126 00:41:18.126 real 0m0.067s 00:41:18.126 user 0m0.037s 00:41:18.126 sys 0m0.029s 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:41:18.126 ************************************ 00:41:18.126 END TEST dd_invalid_arguments 00:41:18.126 ************************************ 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:18.126 ************************************ 00:41:18.126 START TEST dd_double_input 00:41:18.126 ************************************ 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:41:18.126 [2024-12-09 09:53:25.511655] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
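The negative cases in this stretch all follow the same shape: invoke spdk_dd with an invalid flag combination, require a non-zero exit status, and check the diagnostic it prints. A simplified standalone sketch of the dd_double_input case, without the autotest_common.sh NOT/es machinery; the error string is the one spdk_dd emitted above:

    # Assert that spdk_dd rejects --if combined with --ib: the command must fail
    # and the expected diagnostic must appear on stderr.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    touch dd.dump0
    if err=$("$SPDK_DD" --if=dd.dump0 --ib= --ob= 2>&1); then
        echo "FAIL: spdk_dd accepted both --if and --ib" >&2
        exit 1
    fi
    grep -q 'You may specify either --if or --ib, but not both' <<<"$err" || exit 1
    echo "PASS: conflicting input options rejected"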
00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:18.126 00:41:18.126 real 0m0.095s 00:41:18.126 user 0m0.060s 00:41:18.126 sys 0m0.032s 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:41:18.126 ************************************ 00:41:18.126 END TEST dd_double_input 00:41:18.126 ************************************ 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:18.126 ************************************ 00:41:18.126 START TEST dd_double_output 00:41:18.126 ************************************ 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:18.126 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:41:18.384 [2024-12-09 09:53:25.643922] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:18.384 00:41:18.384 real 0m0.083s 00:41:18.384 user 0m0.053s 00:41:18.384 sys 0m0.028s 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.384 ************************************ 00:41:18.384 END TEST dd_double_output 00:41:18.384 ************************************ 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:18.384 ************************************ 00:41:18.384 START TEST dd_no_input 00:41:18.384 ************************************ 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.384 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:41:18.385 [2024-12-09 09:53:25.772216] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:18.385 00:41:18.385 real 0m0.099s 00:41:18.385 user 0m0.061s 00:41:18.385 sys 0m0.037s 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:41:18.385 ************************************ 00:41:18.385 END TEST dd_no_input 00:41:18.385 ************************************ 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:18.385 ************************************ 00:41:18.385 START TEST dd_no_output 00:41:18.385 ************************************ 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:18.385 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:18.643 [2024-12-09 09:53:25.889220] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:41:18.643 09:53:25 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:18.643 00:41:18.643 real 0m0.066s 00:41:18.643 user 0m0.040s 00:41:18.643 sys 0m0.025s 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:41:18.643 ************************************ 00:41:18.643 END TEST dd_no_output 00:41:18.643 ************************************ 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:18.643 ************************************ 00:41:18.643 START TEST dd_wrong_blocksize 00:41:18.643 ************************************ 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:18.643 09:53:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:41:18.643 [2024-12-09 09:53:25.992473] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:41:18.643 09:53:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:41:18.643 09:53:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:18.643 09:53:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:18.643 09:53:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:18.643 00:41:18.643 real 0m0.064s 00:41:18.643 user 0m0.043s 00:41:18.643 sys 0m0.020s 00:41:18.643 09:53:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.643 09:53:26 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:41:18.643 ************************************ 00:41:18.643 END TEST dd_wrong_blocksize 00:41:18.643 ************************************ 00:41:18.643 09:53:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:41:18.643 09:53:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:18.643 09:53:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:18.644 ************************************ 00:41:18.644 START TEST dd_smaller_blocksize 00:41:18.644 ************************************ 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:18.644 
09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:18.644 09:53:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:41:18.644 [2024-12-09 09:53:26.102892] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:41:18.644 [2024-12-09 09:53:26.103012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61660 ] 00:41:18.902 [2024-12-09 09:53:26.252084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:18.902 [2024-12-09 09:53:26.302500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.902 [2024-12-09 09:53:26.338265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:19.159 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:41:19.421 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:41:19.421 [2024-12-09 09:53:26.884557] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:41:19.421 [2024-12-09 09:53:26.884636] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:19.678 [2024-12-09 09:53:26.954597] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:19.678 00:41:19.678 real 0m1.016s 00:41:19.678 user 0m0.406s 00:41:19.678 sys 0m0.500s 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:41:19.678 ************************************ 00:41:19.678 END TEST dd_smaller_blocksize 00:41:19.678 ************************************ 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:19.678 ************************************ 00:41:19.678 START TEST dd_invalid_count 00:41:19.678 ************************************ 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:41:19.678 [2024-12-09 09:53:27.155297] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:19.678 00:41:19.678 real 0m0.068s 00:41:19.678 user 0m0.038s 00:41:19.678 sys 0m0.029s 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:19.678 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:41:19.678 ************************************ 00:41:19.678 END TEST dd_invalid_count 00:41:19.678 ************************************ 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:19.937 ************************************ 
00:41:19.937 START TEST dd_invalid_oflag 00:41:19.937 ************************************ 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:41:19.937 [2024-12-09 09:53:27.276349] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:19.937 00:41:19.937 real 0m0.095s 00:41:19.937 user 0m0.060s 00:41:19.937 sys 0m0.034s 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:41:19.937 ************************************ 00:41:19.937 END TEST dd_invalid_oflag 00:41:19.937 ************************************ 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:19.937 ************************************ 00:41:19.937 START TEST dd_invalid_iflag 00:41:19.937 
************************************ 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:41:19.937 [2024-12-09 09:53:27.409652] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:41:19.937 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:19.938 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:19.938 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:19.938 00:41:19.938 real 0m0.091s 00:41:19.938 user 0m0.057s 00:41:19.938 sys 0m0.031s 00:41:19.938 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:19.938 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:41:19.938 ************************************ 00:41:19.938 END TEST dd_invalid_iflag 00:41:19.938 ************************************ 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:20.196 ************************************ 00:41:20.196 START TEST dd_unknown_flag 00:41:20.196 ************************************ 00:41:20.196 
09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:20.196 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:41:20.196 [2024-12-09 09:53:27.540632] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:41:20.196 [2024-12-09 09:53:27.540757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61758 ] 00:41:20.196 [2024-12-09 09:53:27.693299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:20.455 [2024-12-09 09:53:27.726739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:20.455 [2024-12-09 09:53:27.756413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:20.455 [2024-12-09 09:53:27.776045] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:41:20.455 [2024-12-09 09:53:27.776103] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:20.455 [2024-12-09 09:53:27.776162] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:41:20.455 [2024-12-09 09:53:27.776176] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:20.455 [2024-12-09 09:53:27.776406] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:41:20.455 [2024-12-09 09:53:27.776429] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:20.455 [2024-12-09 09:53:27.776493] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:41:20.455 [2024-12-09 09:53:27.776504] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:41:20.455 [2024-12-09 09:53:27.844057] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:41:20.455 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:41:20.455 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:20.455 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:41:20.455 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:41:20.455 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:41:20.455 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:20.455 00:41:20.455 real 0m0.478s 00:41:20.455 user 0m0.271s 00:41:20.455 sys 0m0.120s 00:41:20.455 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:20.455 09:53:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:41:20.455 ************************************ 00:41:20.455 END TEST dd_unknown_flag 00:41:20.455 ************************************ 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:20.713 ************************************ 00:41:20.713 START TEST dd_invalid_json 00:41:20.713 ************************************ 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.713 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:20.714 09:53:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:41:20.714 [2024-12-09 09:53:28.048207] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:41:20.714 [2024-12-09 09:53:28.048302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61786 ] 00:41:20.714 [2024-12-09 09:53:28.196603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:20.972 [2024-12-09 09:53:28.230392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:20.972 [2024-12-09 09:53:28.230473] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:41:20.972 [2024-12-09 09:53:28.230503] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:20.972 [2024-12-09 09:53:28.230517] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:20.972 [2024-12-09 09:53:28.230567] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:20.972 00:41:20.972 real 0m0.364s 00:41:20.972 user 0m0.205s 00:41:20.972 sys 0m0.056s 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:41:20.972 ************************************ 00:41:20.972 END TEST dd_invalid_json 00:41:20.972 ************************************ 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:20.972 ************************************ 00:41:20.972 START TEST dd_invalid_seek 00:41:20.972 ************************************ 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:41:20.972 
09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:20.972 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:41:20.972 { 00:41:20.972 "subsystems": [ 00:41:20.972 { 00:41:20.972 "subsystem": "bdev", 00:41:20.972 "config": [ 00:41:20.972 { 00:41:20.972 "params": { 00:41:20.972 "block_size": 512, 00:41:20.972 "num_blocks": 512, 00:41:20.972 "name": "malloc0" 00:41:20.972 }, 00:41:20.972 "method": "bdev_malloc_create" 00:41:20.972 }, 00:41:20.972 { 00:41:20.972 "params": { 00:41:20.972 "block_size": 512, 00:41:20.972 "num_blocks": 512, 00:41:20.972 "name": "malloc1" 00:41:20.972 }, 00:41:20.972 "method": "bdev_malloc_create" 00:41:20.972 }, 00:41:20.972 { 00:41:20.972 "method": "bdev_wait_for_examine" 00:41:20.972 } 00:41:20.972 ] 00:41:20.972 } 00:41:20.972 ] 00:41:20.972 } 00:41:20.972 [2024-12-09 09:53:28.471312] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:41:20.972 [2024-12-09 09:53:28.471434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61816 ] 00:41:21.230 [2024-12-09 09:53:28.621249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:21.230 [2024-12-09 09:53:28.654321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:21.230 [2024-12-09 09:53:28.684913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:21.230 [2024-12-09 09:53:28.731164] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:41:21.230 [2024-12-09 09:53:28.731242] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:21.489 [2024-12-09 09:53:28.808781] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:21.489 00:41:21.489 real 0m0.519s 00:41:21.489 user 0m0.361s 00:41:21.489 sys 0m0.120s 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:41:21.489 ************************************ 00:41:21.489 END TEST dd_invalid_seek 00:41:21.489 ************************************ 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:21.489 ************************************ 00:41:21.489 START TEST dd_invalid_skip 00:41:21.489 ************************************ 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:21.489 09:53:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:41:21.747 { 00:41:21.747 "subsystems": [ 00:41:21.747 { 00:41:21.747 "subsystem": "bdev", 00:41:21.747 "config": [ 00:41:21.747 { 00:41:21.747 "params": { 00:41:21.747 "block_size": 512, 00:41:21.747 "num_blocks": 512, 00:41:21.747 "name": "malloc0" 00:41:21.747 }, 00:41:21.747 "method": "bdev_malloc_create" 00:41:21.747 }, 00:41:21.747 { 00:41:21.747 "params": { 00:41:21.747 "block_size": 512, 00:41:21.747 "num_blocks": 512, 00:41:21.747 "name": "malloc1" 00:41:21.747 }, 00:41:21.747 "method": "bdev_malloc_create" 00:41:21.747 }, 00:41:21.747 { 00:41:21.747 "method": "bdev_wait_for_examine" 00:41:21.747 } 00:41:21.747 ] 00:41:21.747 } 00:41:21.747 ] 00:41:21.747 } 00:41:21.747 [2024-12-09 09:53:29.031398] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:41:21.747 [2024-12-09 09:53:29.031549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61849 ] 00:41:21.747 [2024-12-09 09:53:29.183510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:21.747 [2024-12-09 09:53:29.216881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:21.747 [2024-12-09 09:53:29.246751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:22.005 [2024-12-09 09:53:29.292443] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:41:22.005 [2024-12-09 09:53:29.292512] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:22.005 [2024-12-09 09:53:29.360421] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:41:22.005 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:22.006 00:41:22.006 real 0m0.511s 00:41:22.006 user 0m0.344s 00:41:22.006 sys 0m0.117s 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:41:22.006 ************************************ 00:41:22.006 END TEST dd_invalid_skip 00:41:22.006 ************************************ 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:22.006 09:53:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:22.264 ************************************ 00:41:22.264 START TEST dd_invalid_input_count 00:41:22.264 ************************************ 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:22.264 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:41:22.264 [2024-12-09 09:53:29.561450] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:41:22.264 [2024-12-09 09:53:29.561543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61883 ] 00:41:22.264 { 00:41:22.264 "subsystems": [ 00:41:22.264 { 00:41:22.264 "subsystem": "bdev", 00:41:22.264 "config": [ 00:41:22.264 { 00:41:22.264 "params": { 00:41:22.264 "block_size": 512, 00:41:22.264 "num_blocks": 512, 00:41:22.264 "name": "malloc0" 00:41:22.264 }, 00:41:22.264 "method": "bdev_malloc_create" 00:41:22.264 }, 00:41:22.264 { 00:41:22.264 "params": { 00:41:22.264 "block_size": 512, 00:41:22.264 "num_blocks": 512, 00:41:22.264 "name": "malloc1" 00:41:22.264 }, 00:41:22.264 "method": "bdev_malloc_create" 00:41:22.264 }, 00:41:22.264 { 00:41:22.264 "method": "bdev_wait_for_examine" 00:41:22.264 } 00:41:22.264 ] 00:41:22.264 } 00:41:22.264 ] 00:41:22.264 } 00:41:22.264 [2024-12-09 09:53:29.711984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:22.264 [2024-12-09 09:53:29.745328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.523 [2024-12-09 09:53:29.775339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:22.523 [2024-12-09 09:53:29.821015] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:41:22.523 [2024-12-09 09:53:29.821079] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:22.523 [2024-12-09 09:53:29.889367] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:41:22.523 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:41:22.523 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:22.523 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:41:22.523 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:41:22.523 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:41:22.523 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:22.523 00:41:22.523 real 0m0.489s 00:41:22.523 user 0m0.350s 00:41:22.523 sys 0m0.108s 00:41:22.523 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.523 ************************************ 00:41:22.523 END TEST dd_invalid_input_count 00:41:22.523 ************************************ 00:41:22.523 09:53:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:22.781 ************************************ 00:41:22.781 START TEST dd_invalid_output_count 00:41:22.781 ************************************ 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:22.781 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.782 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:22.782 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:22.782 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:22.782 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:41:22.782 [2024-12-09 09:53:30.089383] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:41:22.782 [2024-12-09 09:53:30.089474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61916 ] 00:41:22.782 { 00:41:22.782 "subsystems": [ 00:41:22.782 { 00:41:22.782 "subsystem": "bdev", 00:41:22.782 "config": [ 00:41:22.782 { 00:41:22.782 "params": { 00:41:22.782 "block_size": 512, 00:41:22.782 "num_blocks": 512, 00:41:22.782 "name": "malloc0" 00:41:22.782 }, 00:41:22.782 "method": "bdev_malloc_create" 00:41:22.782 }, 00:41:22.782 { 00:41:22.782 "method": "bdev_wait_for_examine" 00:41:22.782 } 00:41:22.782 ] 00:41:22.782 } 00:41:22.782 ] 00:41:22.782 } 00:41:22.782 [2024-12-09 09:53:30.246915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:22.782 [2024-12-09 09:53:30.279861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:23.040 [2024-12-09 09:53:30.309688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:23.040 [2024-12-09 09:53:30.347066] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:41:23.040 [2024-12-09 09:53:30.347153] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:23.041 [2024-12-09 09:53:30.415870] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:41:23.041 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:41:23.041 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:23.041 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:41:23.041 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:41:23.041 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:41:23.041 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:23.041 00:41:23.041 real 0m0.481s 00:41:23.041 user 0m0.340s 00:41:23.041 sys 0m0.098s 00:41:23.041 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:23.041 09:53:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:41:23.041 ************************************ 00:41:23.041 END TEST dd_invalid_output_count 00:41:23.041 ************************************ 00:41:23.299 09:53:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:41:23.299 09:53:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:23.299 09:53:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:23.299 09:53:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:23.299 ************************************ 00:41:23.299 START TEST dd_bs_not_multiple 00:41:23.299 ************************************ 00:41:23.299 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:41:23.299 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:41:23.300 09:53:30 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:23.300 09:53:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:41:23.300 { 00:41:23.300 "subsystems": [ 00:41:23.300 { 00:41:23.300 "subsystem": "bdev", 00:41:23.300 "config": [ 00:41:23.300 { 00:41:23.300 "params": { 00:41:23.300 "block_size": 512, 00:41:23.300 "num_blocks": 512, 00:41:23.300 "name": "malloc0" 00:41:23.300 }, 00:41:23.300 "method": "bdev_malloc_create" 00:41:23.300 }, 00:41:23.300 { 00:41:23.300 "params": { 00:41:23.300 "block_size": 512, 00:41:23.300 "num_blocks": 512, 00:41:23.300 "name": "malloc1" 00:41:23.300 }, 00:41:23.300 "method": "bdev_malloc_create" 00:41:23.300 }, 
00:41:23.300 { 00:41:23.300 "method": "bdev_wait_for_examine" 00:41:23.300 } 00:41:23.300 ] 00:41:23.300 } 00:41:23.300 ] 00:41:23.300 } 00:41:23.300 [2024-12-09 09:53:30.630748] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:41:23.300 [2024-12-09 09:53:30.630842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61948 ] 00:41:23.300 [2024-12-09 09:53:30.780133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:23.558 [2024-12-09 09:53:30.813276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:23.559 [2024-12-09 09:53:30.844087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:23.559 [2024-12-09 09:53:30.890099] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:41:23.559 [2024-12-09 09:53:30.890167] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:23.559 [2024-12-09 09:53:30.959519] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:41:23.817 09:53:31 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:41:23.817 09:53:31 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:23.817 09:53:31 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:41:23.817 09:53:31 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:41:23.817 09:53:31 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:41:23.817 09:53:31 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:23.817 00:41:23.817 real 0m0.503s 00:41:23.817 user 0m0.336s 00:41:23.817 sys 0m0.119s 00:41:23.817 09:53:31 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:23.817 09:53:31 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:41:23.817 ************************************ 00:41:23.817 END TEST dd_bs_not_multiple 00:41:23.817 ************************************ 00:41:23.817 00:41:23.817 real 0m5.973s 00:41:23.817 user 0m3.397s 00:41:23.817 sys 0m2.039s 00:41:23.817 09:53:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:23.817 09:53:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:23.817 ************************************ 00:41:23.817 END TEST spdk_dd_negative 00:41:23.817 ************************************ 00:41:23.817 00:41:23.817 real 1m13.925s 00:41:23.817 user 0m49.485s 00:41:23.817 sys 0m29.051s 00:41:23.817 09:53:31 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:23.817 09:53:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:23.817 ************************************ 00:41:23.817 END TEST spdk_dd 00:41:23.817 ************************************ 00:41:23.817 09:53:31 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:41:23.817 09:53:31 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:41:23.817 09:53:31 -- spdk/autotest.sh@260 -- # timing_exit lib 00:41:23.817 09:53:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:23.817 09:53:31 -- common/autotest_common.sh@10 -- # 
set +x 00:41:23.817 09:53:31 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:41:23.817 09:53:31 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:41:23.817 09:53:31 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:41:23.817 09:53:31 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:41:23.817 09:53:31 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:41:23.817 09:53:31 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:41:23.817 09:53:31 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:41:23.817 09:53:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:23.817 09:53:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:23.817 09:53:31 -- common/autotest_common.sh@10 -- # set +x 00:41:23.817 ************************************ 00:41:23.817 START TEST nvmf_tcp 00:41:23.817 ************************************ 00:41:23.817 09:53:31 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:41:23.817 * Looking for test storage... 00:41:23.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:41:23.817 09:53:31 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:23.817 09:53:31 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:23.817 09:53:31 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:24.076 09:53:31 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:24.076 09:53:31 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:24.076 09:53:31 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:24.076 09:53:31 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:24.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.076 --rc genhtml_branch_coverage=1 00:41:24.076 --rc genhtml_function_coverage=1 00:41:24.076 --rc genhtml_legend=1 00:41:24.076 --rc geninfo_all_blocks=1 00:41:24.076 --rc geninfo_unexecuted_blocks=1 00:41:24.076 00:41:24.076 ' 00:41:24.076 09:53:31 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:24.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.076 --rc genhtml_branch_coverage=1 00:41:24.076 --rc genhtml_function_coverage=1 00:41:24.076 --rc genhtml_legend=1 00:41:24.076 --rc geninfo_all_blocks=1 00:41:24.076 --rc geninfo_unexecuted_blocks=1 00:41:24.076 00:41:24.076 ' 00:41:24.076 09:53:31 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:24.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.076 --rc genhtml_branch_coverage=1 00:41:24.076 --rc genhtml_function_coverage=1 00:41:24.076 --rc genhtml_legend=1 00:41:24.076 --rc geninfo_all_blocks=1 00:41:24.076 --rc geninfo_unexecuted_blocks=1 00:41:24.076 00:41:24.076 ' 00:41:24.076 09:53:31 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:24.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.076 --rc genhtml_branch_coverage=1 00:41:24.076 --rc genhtml_function_coverage=1 00:41:24.076 --rc genhtml_legend=1 00:41:24.076 --rc geninfo_all_blocks=1 00:41:24.076 --rc geninfo_unexecuted_blocks=1 00:41:24.076 00:41:24.076 ' 00:41:24.076 09:53:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:41:24.076 09:53:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:41:24.076 09:53:31 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:41:24.076 09:53:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:24.076 09:53:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:24.076 09:53:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:24.076 ************************************ 00:41:24.076 START TEST nvmf_target_core 00:41:24.076 ************************************ 00:41:24.076 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:41:24.077 * Looking for test storage... 00:41:24.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:24.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.077 --rc genhtml_branch_coverage=1 00:41:24.077 --rc genhtml_function_coverage=1 00:41:24.077 --rc genhtml_legend=1 00:41:24.077 --rc geninfo_all_blocks=1 00:41:24.077 --rc geninfo_unexecuted_blocks=1 00:41:24.077 00:41:24.077 ' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:24.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.077 --rc genhtml_branch_coverage=1 00:41:24.077 --rc genhtml_function_coverage=1 00:41:24.077 --rc genhtml_legend=1 00:41:24.077 --rc geninfo_all_blocks=1 00:41:24.077 --rc geninfo_unexecuted_blocks=1 00:41:24.077 00:41:24.077 ' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:24.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.077 --rc genhtml_branch_coverage=1 00:41:24.077 --rc genhtml_function_coverage=1 00:41:24.077 --rc genhtml_legend=1 00:41:24.077 --rc geninfo_all_blocks=1 00:41:24.077 --rc geninfo_unexecuted_blocks=1 00:41:24.077 00:41:24.077 ' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:24.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.077 --rc genhtml_branch_coverage=1 00:41:24.077 --rc genhtml_function_coverage=1 00:41:24.077 --rc genhtml_legend=1 00:41:24.077 --rc geninfo_all_blocks=1 00:41:24.077 --rc geninfo_unexecuted_blocks=1 00:41:24.077 00:41:24.077 ' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:24.077 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:24.077 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:41:24.337 ************************************ 00:41:24.337 START TEST nvmf_host_management 00:41:24.337 ************************************ 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:41:24.337 * Looking for test storage... 
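The "[: : integer expression expected" message from nvmf/common.sh line 33 is a bash quirk rather than a test failure: test's -eq operator needs integers on both sides, and the variable being checked expands to an empty string on this run, so the comparison is skipped with a warning and the script carries on. A tiny reproduction with an illustrative variable name (the actual flag checked in common.sh is not visible in this trace):

FLAG=''
[ "$FLAG" -eq 1 ] && echo "flag set"       # bash: [: : integer expression expected
[ "${FLAG:-0}" -eq 1 ] && echo "flag set"  # defaulting the empty value avoids the warning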
00:41:24.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:24.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.337 --rc genhtml_branch_coverage=1 00:41:24.337 --rc genhtml_function_coverage=1 00:41:24.337 --rc genhtml_legend=1 00:41:24.337 --rc geninfo_all_blocks=1 00:41:24.337 --rc geninfo_unexecuted_blocks=1 00:41:24.337 00:41:24.337 ' 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:24.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.337 --rc genhtml_branch_coverage=1 00:41:24.337 --rc genhtml_function_coverage=1 00:41:24.337 --rc genhtml_legend=1 00:41:24.337 --rc geninfo_all_blocks=1 00:41:24.337 --rc geninfo_unexecuted_blocks=1 00:41:24.337 00:41:24.337 ' 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:24.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.337 --rc genhtml_branch_coverage=1 00:41:24.337 --rc genhtml_function_coverage=1 00:41:24.337 --rc genhtml_legend=1 00:41:24.337 --rc geninfo_all_blocks=1 00:41:24.337 --rc geninfo_unexecuted_blocks=1 00:41:24.337 00:41:24.337 ' 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:24.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.337 --rc genhtml_branch_coverage=1 00:41:24.337 --rc genhtml_function_coverage=1 00:41:24.337 --rc genhtml_legend=1 00:41:24.337 --rc geninfo_all_blocks=1 00:41:24.337 --rc geninfo_unexecuted_blocks=1 00:41:24.337 00:41:24.337 ' 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
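Each nested test script runs the same lcov probe, which is why the cmp_versions trace (lt 1.15 2) keeps reappearing: the installed lcov version is split on separators and compared component by component to decide which --rc option names to put into LCOV_OPTS. A condensed sketch of that comparison, assuming plain dotted numeric versions (the real cmp_versions in scripts/common.sh also handles a few more operators):

version_lt() {
    # Return success if version $1 sorts before version $2.
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.x, keep the old lcov_* --rc option names"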
00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.337 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:24.338 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:24.338 09:53:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:24.338 Cannot find device "nvmf_init_br" 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:24.338 Cannot find device "nvmf_init_br2" 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:24.338 Cannot find device "nvmf_tgt_br" 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:24.338 Cannot find device "nvmf_tgt_br2" 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:24.338 Cannot find device "nvmf_init_br" 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:24.338 Cannot find device "nvmf_init_br2" 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:41:24.338 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:24.665 Cannot find device "nvmf_tgt_br" 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:24.665 Cannot find device "nvmf_tgt_br2" 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:24.665 Cannot find device "nvmf_br" 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:24.665 Cannot find device "nvmf_init_if" 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:24.665 Cannot find device "nvmf_init_if2" 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:24.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:24.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:24.665 09:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:24.665 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:24.665 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:41:24.665 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:24.665 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:24.665 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:24.665 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:24.665 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:24.665 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:24.665 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:24.964 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:24.964 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:41:24.964 00:41:24.964 --- 10.0.0.3 ping statistics --- 00:41:24.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:24.964 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:24.964 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:24.964 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:41:24.964 00:41:24.964 --- 10.0.0.4 ping statistics --- 00:41:24.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:24.964 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:24.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:24.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:41:24.964 00:41:24.964 --- 10.0.0.1 ping statistics --- 00:41:24.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:24.964 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:24.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:24.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:41:24.964 00:41:24.964 --- 10.0.0.2 ping statistics --- 00:41:24.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:24.964 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62289 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62289 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62289 ']' 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:24.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:24.964 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:24.964 [2024-12-09 09:53:32.289751] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
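Before the target app starts, nvmf_veth_init (traced above; the "Cannot find device" lines only show that no previous topology was left to tear down) builds the virtual test network: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, iptables opened for port 4420, and connectivity verified with pings. Condensed into one sketch with the same names and addresses the log uses (run as root; the suite additionally tags each iptables rule with an SPDK_NVMF comment):

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3   # initiator -> target, as in the log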
00:41:24.964 [2024-12-09 09:53:32.289871] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:24.964 [2024-12-09 09:53:32.448271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:25.229 [2024-12-09 09:53:32.484768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:25.229 [2024-12-09 09:53:32.484827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:25.229 [2024-12-09 09:53:32.484840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:25.229 [2024-12-09 09:53:32.484848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:25.229 [2024-12-09 09:53:32.484855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:25.229 [2024-12-09 09:53:32.485670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:25.229 [2024-12-09 09:53:32.485736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:25.229 [2024-12-09 09:53:32.485812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:25.229 [2024-12-09 09:53:32.485818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:25.229 [2024-12-09 09:53:32.517368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:25.229 [2024-12-09 09:53:32.606848] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
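nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers (the waitforlisten 62289 step). A rough equivalent of what the trace shows; the wait is approximated here by polling rpc.py rather than using the real waitforlisten helper:

# Same flags as the log: -i 0 (shm id), -e 0xFFFF (tracepoint group mask),
# -m 0x1E (reactors on cores 1-4, matching the four reactor notices above).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Stand-in for waitforlisten: poll the default RPC socket until it responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &> /dev/null; do
    sleep 0.5
done

# The host_management test then creates the TCP transport seen in the trace.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192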
00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:25.229 Malloc0 00:41:25.229 [2024-12-09 09:53:32.675596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62336 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62336 /var/tmp/bdevperf.sock 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62336 ']' 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:25.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
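The cat | rpc_cmd pair above replays the RPC batch that host_management.sh writes into rpcs.txt. The batch itself is not echoed into the log, but given the Malloc0 bdev and the 10.0.0.3:4420 listener that appear here, it is roughly the following set of calls (a hypothetical reconstruction; the exact arguments live in the script's heredoc):

    # Hypothetical reconstruction of the rpcs.txt batch consumed above.
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420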
00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:25.229 { 00:41:25.229 "params": { 00:41:25.229 "name": "Nvme$subsystem", 00:41:25.229 "trtype": "$TEST_TRANSPORT", 00:41:25.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:25.229 "adrfam": "ipv4", 00:41:25.229 "trsvcid": "$NVMF_PORT", 00:41:25.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:25.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:25.229 "hdgst": ${hdgst:-false}, 00:41:25.229 "ddgst": ${ddgst:-false} 00:41:25.229 }, 00:41:25.229 "method": "bdev_nvme_attach_controller" 00:41:25.229 } 00:41:25.229 EOF 00:41:25.229 )") 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:41:25.229 09:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:25.229 "params": { 00:41:25.229 "name": "Nvme0", 00:41:25.229 "trtype": "tcp", 00:41:25.229 "traddr": "10.0.0.3", 00:41:25.229 "adrfam": "ipv4", 00:41:25.229 "trsvcid": "4420", 00:41:25.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:25.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:25.229 "hdgst": false, 00:41:25.229 "ddgst": false 00:41:25.229 }, 00:41:25.229 "method": "bdev_nvme_attach_controller" 00:41:25.229 }' 00:41:25.488 [2024-12-09 09:53:32.769459] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:41:25.488 [2024-12-09 09:53:32.770034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62336 ] 00:41:25.488 [2024-12-09 09:53:32.917488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:25.488 [2024-12-09 09:53:32.951538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:25.746 [2024-12-09 09:53:32.990733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:25.746 Running I/O for 10 seconds... 
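The JSON printed above is gen_nvmf_target_json expanding its heredoc template for subsystem 0; bdevperf loads it at startup and attaches controller Nvme0 to the target. Issued against a live bdevperf RPC socket, the equivalent call would be (illustrative, using the standard rpc.py flags for bdev_nvme_attach_controller):

    # Illustrative equivalent of the generated "bdev_nvme_attach_controller" entry.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0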
00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:41:25.746 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:41:26.004 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:41:26.004 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:41:26.004 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:41:26.004 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:41:26.004 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.004 09:53:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:26.004 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.263 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:41:26.263 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:41:26.263 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:41:26.263 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:41:26.263 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:41:26.263 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:26.263 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.263 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:26.263 [2024-12-09 09:53:33.544668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.544736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.544775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.544792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.544811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.544827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.544846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.544860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.544877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.544891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.544910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.544926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.544943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 
09:53:33.544974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.544996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545309] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.263 [2024-12-09 09:53:33.545380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.263 [2024-12-09 09:53:33.545399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.545946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.545982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.264 [2024-12-09 09:53:33.546648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.264 [2024-12-09 09:53:33.546663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.546680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.265 [2024-12-09 09:53:33.546694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.546712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.265 [2024-12-09 09:53:33.546728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.546746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.265 [2024-12-09 09:53:33.546760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.546776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.265 [2024-12-09 09:53:33.546791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.546810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.265 [2024-12-09 09:53:33.546825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.546842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.265 [2024-12-09 09:53:33.546856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.546874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:26.265 [2024-12-09 09:53:33.546890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.546907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab4c00 is same with the state(6) to be set 00:41:26.265 [2024-12-09 09:53:33.547119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:26.265 [2024-12-09 09:53:33.547159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.547180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:26.265 [2024-12-09 09:53:33.547195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.547211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:26.265 [2024-12-09 09:53:33.547226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.547242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:26.265 [2024-12-09 09:53:33.547256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:26.265 [2024-12-09 09:53:33.547269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5ce0 is same with the state(6) to be set 00:41:26.265 [2024-12-09 09:53:33.548767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting contro 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.265 ller 00:41:26.265 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:26.265 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.265 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:26.265 task offset: 65536 on job bdev=Nvme0n1 fails 00:41:26.265 00:41:26.265 Latency(us) 00:41:26.265 [2024-12-09T09:53:33.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:26.265 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:26.265 Job: Nvme0n1 ended in about 0.45 seconds with error 00:41:26.265 Verification LBA range: start 0x0 length 0x400 00:41:26.265 Nvme0n1 : 0.45 1145.88 71.62 143.23 0.00 47756.39 3276.80 47662.55 00:41:26.265 [2024-12-09T09:53:33.767Z] =================================================================================================================== 00:41:26.265 [2024-12-09T09:53:33.767Z] Total : 1145.88 71.62 143.23 0.00 47756.39 3276.80 47662.55 00:41:26.265 [2024-12-09 09:53:33.551160] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:26.265 [2024-12-09 09:53:33.551203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab5ce0 (9): Bad file descriptor 00:41:26.265 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.265 09:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:41:26.265 [2024-12-09 09:53:33.558590] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
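The read_io_count values traced earlier (67, then 451) come from the waitforio gate in host_management.sh, which makes sure bdevperf has completed at least 100 reads before nvmf_subsystem_remove_host revokes the host's access, producing the SQ-deletion aborts and the controller reset logged above; nvmf_subsystem_add_host then restores access and the reset succeeds. Stripped of xtrace noise, that gate is essentially the loop below (a sketch matching the trace):

    # Sketch of the waitforio gate seen in the trace above.
    ret=1
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done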
00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62336 00:41:27.199 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62336) - No such process 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:27.199 { 00:41:27.199 "params": { 00:41:27.199 "name": "Nvme$subsystem", 00:41:27.199 "trtype": "$TEST_TRANSPORT", 00:41:27.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:27.199 "adrfam": "ipv4", 00:41:27.199 "trsvcid": "$NVMF_PORT", 00:41:27.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:27.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:27.199 "hdgst": ${hdgst:-false}, 00:41:27.199 "ddgst": ${ddgst:-false} 00:41:27.199 }, 00:41:27.199 "method": "bdev_nvme_attach_controller" 00:41:27.199 } 00:41:27.199 EOF 00:41:27.199 )") 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:41:27.199 09:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:27.199 "params": { 00:41:27.199 "name": "Nvme0", 00:41:27.199 "trtype": "tcp", 00:41:27.199 "traddr": "10.0.0.3", 00:41:27.199 "adrfam": "ipv4", 00:41:27.199 "trsvcid": "4420", 00:41:27.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:27.199 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:27.199 "hdgst": false, 00:41:27.199 "ddgst": false 00:41:27.199 }, 00:41:27.199 "method": "bdev_nvme_attach_controller" 00:41:27.199 }' 00:41:27.199 [2024-12-09 09:53:34.626833] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
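The second bdevperf pass above reads its config from /dev/fd/62; that descriptor is just bash process substitution handing the output of gen_nvmf_target_json to bdevperf, so the invocation reduces to the shape below (paths shortened for readability):

    # How the JSON reaches bdevperf: process substitution, no temp file needed.
    bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1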
00:41:27.199 [2024-12-09 09:53:34.626978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62376 ] 00:41:27.457 [2024-12-09 09:53:34.815567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.457 [2024-12-09 09:53:34.864455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:27.457 [2024-12-09 09:53:34.915521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:27.715 Running I/O for 1 seconds... 00:41:28.649 716.00 IOPS, 44.75 MiB/s 00:41:28.649 Latency(us) 00:41:28.649 [2024-12-09T09:53:36.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:28.649 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:28.649 Verification LBA range: start 0x0 length 0x400 00:41:28.649 Nvme0n1 : 1.09 766.77 47.92 0.00 0.00 80473.69 9055.88 75783.45 00:41:28.649 [2024-12-09T09:53:36.151Z] =================================================================================================================== 00:41:28.649 [2024-12-09T09:53:36.151Z] Total : 766.77 47.92 0.00 0.00 80473.69 9055.88 75783.45 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:29.215 rmmod nvme_tcp 00:41:29.215 rmmod nvme_fabrics 00:41:29.215 rmmod nvme_keyring 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62289 ']' 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62289 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62289 ']' 00:41:29.215 09:53:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62289 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62289 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:29.215 killing process with pid 62289 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62289' 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62289 00:41:29.215 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62289 00:41:29.474 [2024-12-09 09:53:36.730605] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:29.474 09:53:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:29.474 09:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:41:29.733 00:41:29.733 real 0m5.419s 00:41:29.733 user 0m19.490s 00:41:29.733 sys 0m1.329s 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:29.733 ************************************ 00:41:29.733 END TEST nvmf_host_management 00:41:29.733 ************************************ 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:41:29.733 ************************************ 00:41:29.733 START TEST nvmf_lvol 00:41:29.733 ************************************ 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:41:29.733 * Looking for test storage... 
00:41:29.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:29.733 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:29.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.734 --rc genhtml_branch_coverage=1 00:41:29.734 --rc genhtml_function_coverage=1 00:41:29.734 --rc genhtml_legend=1 00:41:29.734 --rc geninfo_all_blocks=1 00:41:29.734 --rc geninfo_unexecuted_blocks=1 00:41:29.734 00:41:29.734 ' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:29.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.734 --rc genhtml_branch_coverage=1 00:41:29.734 --rc genhtml_function_coverage=1 00:41:29.734 --rc genhtml_legend=1 00:41:29.734 --rc geninfo_all_blocks=1 00:41:29.734 --rc geninfo_unexecuted_blocks=1 00:41:29.734 00:41:29.734 ' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:29.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.734 --rc genhtml_branch_coverage=1 00:41:29.734 --rc genhtml_function_coverage=1 00:41:29.734 --rc genhtml_legend=1 00:41:29.734 --rc geninfo_all_blocks=1 00:41:29.734 --rc geninfo_unexecuted_blocks=1 00:41:29.734 00:41:29.734 ' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:29.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.734 --rc genhtml_branch_coverage=1 00:41:29.734 --rc genhtml_function_coverage=1 00:41:29.734 --rc genhtml_legend=1 00:41:29.734 --rc geninfo_all_blocks=1 00:41:29.734 --rc geninfo_unexecuted_blocks=1 00:41:29.734 00:41:29.734 ' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:29.734 09:53:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:29.734 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:41:29.734 
09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:29.734 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
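Note on the setup being traced here: the nvmf_veth_init variables describe the virtual test network. Two initiator-side interfaces (10.0.0.1 and 10.0.0.2) stay in the root namespace, two target-side interfaces (10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and all four veth peers are joined by one Linux bridge, nvmf_br. A condensed sketch of the topology the following commands build (assuming root privileges and iproute2; the real helper in test/nvmf/common.sh first tears down leftovers from earlier runs, which is why the "Cannot find device" messages below are expected and tolerated):

ip netns add nvmf_tgt_ns_spdk

# Four veth pairs: the *_if ends carry addresses, the *_br ends join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side endpoints live in the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses in the root namespace, target addresses inside the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and stitch the bridge-side peers together with nvmf_br.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done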
00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:29.993 Cannot find device "nvmf_init_br" 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:29.993 Cannot find device "nvmf_init_br2" 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:29.993 Cannot find device "nvmf_tgt_br" 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:29.993 Cannot find device "nvmf_tgt_br2" 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:29.993 Cannot find device "nvmf_init_br" 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:29.993 Cannot find device "nvmf_init_br2" 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:29.993 Cannot find device "nvmf_tgt_br" 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:29.993 Cannot find device "nvmf_tgt_br2" 00:41:29.993 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:29.994 Cannot find device "nvmf_br" 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:29.994 Cannot find device "nvmf_init_if" 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:29.994 Cannot find device "nvmf_init_if2" 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:29.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:29.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:29.994 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:30.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:30.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:41:30.253 00:41:30.253 --- 10.0.0.3 ping statistics --- 00:41:30.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:30.253 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:30.253 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:30.253 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:41:30.253 00:41:30.253 --- 10.0.0.4 ping statistics --- 00:41:30.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:30.253 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:30.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:30.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:41:30.253 00:41:30.253 --- 10.0.0.1 ping statistics --- 00:41:30.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:30.253 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:30.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:30.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:41:30.253 00:41:30.253 --- 10.0.0.2 ping statistics --- 00:41:30.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:30.253 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62647 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62647 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62647 ']' 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:30.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:30.253 09:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:30.253 [2024-12-09 09:53:37.681214] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
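With the fabric in place, nvmfappstart launches the target application inside the namespace so that it owns the 10.0.0.3/10.0.0.4 addresses while the initiator-side tools keep running in the root namespace. The flags come straight from the trace: -i 0 is the shared-memory ID, -e 0xFFFF enables every tracepoint group, and -m 0x7 pins three reactors to cores 0 through 2. A minimal sketch of that launch (the harness then blocks in its waitforlisten helper until /var/tmp/spdk.sock answers; a simplified version of that wait loop is sketched further below):

# Minimal sketch of nvmfappstart as traced above; paths and flags copied from the log.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)

"${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" -m 0x7 &
nvmfpid=$!
# ... wait for /var/tmp/spdk.sock to come up before issuing RPCs ...

# All further configuration goes through the JSON-RPC interface.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192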
00:41:30.253 [2024-12-09 09:53:37.682100] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:30.511 [2024-12-09 09:53:37.842267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:30.511 [2024-12-09 09:53:37.891039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:30.511 [2024-12-09 09:53:37.891117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:30.511 [2024-12-09 09:53:37.891137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:30.511 [2024-12-09 09:53:37.891151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:30.511 [2024-12-09 09:53:37.891163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:30.511 [2024-12-09 09:53:37.892175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:30.511 [2024-12-09 09:53:37.892272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:30.511 [2024-12-09 09:53:37.892285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.511 [2024-12-09 09:53:37.929644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:30.769 09:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:30.769 09:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:41:30.769 09:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:30.769 09:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:30.769 09:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:30.769 09:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:30.769 09:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:31.028 [2024-12-09 09:53:38.433235] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:31.028 09:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:31.594 09:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:41:31.594 09:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:31.852 09:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:41:31.852 09:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:41:32.418 09:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:41:32.677 09:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=da87f29e-eba6-4cbb-821d-a48a3ed0ee83 00:41:32.677 09:53:40 
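The lvol store created above (lvs=da87f29e-...) sits on a raid0 bdev striped across two 64 MiB malloc bdevs. The rest of the test carves an lvol of LVOL_BDEV_INIT_SIZE=20 out of that store, exports it over NVMe/TCP, and then snapshots, resizes (to LVOL_BDEV_FINAL_SIZE=30), clones and inflates it while spdk_nvme_perf keeps random writes in flight against the same namespace. A condensed recap of that flow, with the RPC arguments taken from the trace (the script captures each RPC's printed UUID, as the lvs= assignment above shows):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512                                  # -> Malloc0
$rpc bdev_malloc_create 64 512                                  # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe them
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvol store UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # initial size 20

# Export the lvol over NVMe/TCP on the namespaced target address.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# Keep random writes in flight from the initiator side ...
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!

# ... while the lvol is reshaped underneath the live connection.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                                # grow 20 -> 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                                 # decouple clone from snapshot
wait $perf_pid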
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u da87f29e-eba6-4cbb-821d-a48a3ed0ee83 lvol 20 00:41:32.935 09:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c67325a0-743f-41c2-bfc6-1302dc2d0e97 00:41:32.935 09:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:33.500 09:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c67325a0-743f-41c2-bfc6-1302dc2d0e97 00:41:33.500 09:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:41:34.066 [2024-12-09 09:53:41.300618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:34.066 09:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:41:34.323 09:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:41:34.323 09:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62726 00:41:34.323 09:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:41:35.257 09:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot c67325a0-743f-41c2-bfc6-1302dc2d0e97 MY_SNAPSHOT 00:41:35.516 09:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9e17155a-9151-4f59-8897-d4ec8ecf5953 00:41:35.516 09:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize c67325a0-743f-41c2-bfc6-1302dc2d0e97 30 00:41:36.083 09:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 9e17155a-9151-4f59-8897-d4ec8ecf5953 MY_CLONE 00:41:36.341 09:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7e0d81d4-fe3a-4267-b7df-0dbe5c87486c 00:41:36.341 09:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 7e0d81d4-fe3a-4267-b7df-0dbe5c87486c 00:41:36.907 09:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62726 00:41:45.013 Initializing NVMe Controllers 00:41:45.013 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:41:45.013 Controller IO queue size 128, less than required. 00:41:45.013 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:45.013 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:41:45.013 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:41:45.013 Initialization complete. Launching workers. 
00:41:45.013 ======================================================== 00:41:45.013 Latency(us) 00:41:45.013 Device Information : IOPS MiB/s Average min max 00:41:45.013 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8737.50 34.13 14650.89 1926.28 79114.63 00:41:45.013 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8884.10 34.70 14412.10 2511.48 52452.16 00:41:45.013 ======================================================== 00:41:45.013 Total : 17621.60 68.83 14530.50 1926.28 79114.63 00:41:45.013 00:41:45.013 09:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:45.013 09:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c67325a0-743f-41c2-bfc6-1302dc2d0e97 00:41:45.271 09:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da87f29e-eba6-4cbb-821d-a48a3ed0ee83 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:45.837 rmmod nvme_tcp 00:41:45.837 rmmod nvme_fabrics 00:41:45.837 rmmod nvme_keyring 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62647 ']' 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62647 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62647 ']' 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62647 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62647 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:45.837 killing process with pid 62647 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62647' 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62647 00:41:45.837 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62647 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:46.095 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:41:46.354 00:41:46.354 real 0m16.591s 00:41:46.354 user 1m8.343s 00:41:46.354 sys 0m4.431s 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:41:46.354 ************************************ 00:41:46.354 END TEST nvmf_lvol 00:41:46.354 ************************************ 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:41:46.354 ************************************ 00:41:46.354 START TEST nvmf_lvs_grow 00:41:46.354 ************************************ 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:41:46.354 * Looking for test storage... 00:41:46.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:46.354 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:46.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.613 --rc genhtml_branch_coverage=1 00:41:46.613 --rc genhtml_function_coverage=1 00:41:46.613 --rc genhtml_legend=1 00:41:46.613 --rc geninfo_all_blocks=1 00:41:46.613 --rc geninfo_unexecuted_blocks=1 00:41:46.613 00:41:46.613 ' 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:46.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.613 --rc genhtml_branch_coverage=1 00:41:46.613 --rc genhtml_function_coverage=1 00:41:46.613 --rc genhtml_legend=1 00:41:46.613 --rc geninfo_all_blocks=1 00:41:46.613 --rc geninfo_unexecuted_blocks=1 00:41:46.613 00:41:46.613 ' 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:46.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.613 --rc genhtml_branch_coverage=1 00:41:46.613 --rc genhtml_function_coverage=1 00:41:46.613 --rc genhtml_legend=1 00:41:46.613 --rc geninfo_all_blocks=1 00:41:46.613 --rc geninfo_unexecuted_blocks=1 00:41:46.613 00:41:46.613 ' 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:46.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.613 --rc genhtml_branch_coverage=1 00:41:46.613 --rc genhtml_function_coverage=1 00:41:46.613 --rc genhtml_legend=1 00:41:46.613 --rc geninfo_all_blocks=1 00:41:46.613 --rc geninfo_unexecuted_blocks=1 00:41:46.613 00:41:46.613 ' 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:41:46.613 09:53:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:46.613 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:46.614 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
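Unlike the lvol test above, nvmf_lvs_grow drives two SPDK applications at once: the nvmf target on the default /var/tmp/spdk.sock and a separate bdevperf instance, which the script presumably starts further on, using its own socket /var/tmp/bdevperf.sock as defined here. rpc.py selects the application with -s, so both can be configured independently. A small illustrative sketch of that pattern (the RPC names below are examples only, not the calls this particular test makes):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

$rpc_py bdev_lvol_get_lvstores                    # default socket: the nvmf target
$rpc_py -s "$bdevperf_rpc_sock" bdev_get_bdevs    # bdevperf's own RPC socket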
00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:46.614 Cannot find device "nvmf_init_br" 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:46.614 Cannot find device "nvmf_init_br2" 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:46.614 Cannot find device "nvmf_tgt_br" 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:46.614 Cannot find device "nvmf_tgt_br2" 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:46.614 Cannot find device "nvmf_init_br" 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:46.614 Cannot find device "nvmf_init_br2" 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:46.614 Cannot find device "nvmf_tgt_br" 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:46.614 Cannot find device "nvmf_tgt_br2" 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:46.614 Cannot find device "nvmf_br" 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:46.614 Cannot find device "nvmf_init_if" 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:41:46.614 09:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:46.614 Cannot find device "nvmf_init_if2" 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:46.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:46.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:46.614 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
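The ipts wrapper used in the next few commands (and earlier in the lvol test) tags every firewall rule it inserts with an 'SPDK_NVMF:<rule>' comment. That makes teardown simple: the iptr helper run at the end of each test filters the tagged rules out of iptables-save output and restores the rest, with no need to remember individual rules. A minimal sketch of the same idea (the real helpers are the ones the trace attributes to nvmf/common.sh):

ipts() {
    # Insert the rule and record it verbatim in a comment for later removal.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # Drop every rule carrying the SPDK_NVMF tag; leave all other rules alone.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# ... run the test ...
iptr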
00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:46.873 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:46.873 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:41:46.873 00:41:46.873 --- 10.0.0.3 ping statistics --- 00:41:46.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.873 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:46.873 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:46.873 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:41:46.873 00:41:46.873 --- 10.0.0.4 ping statistics --- 00:41:46.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.873 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:46.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:46.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:41:46.873 00:41:46.873 --- 10.0.0.1 ping statistics --- 00:41:46.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.873 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:46.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:46.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:41:46.873 00:41:46.873 --- 10.0.0.2 ping statistics --- 00:41:46.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.873 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63103 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63103 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63103 ']' 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:46.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:46.873 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:47.131 [2024-12-09 09:53:54.382596] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
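With the bridge in place, the iptables rules open TCP/4420 on the initiator-side interfaces and the four pings confirm reachability in both directions. nvmfappstart then launches the SPDK target inside the namespace and blocks until its RPC socket answers; reduced to the essentials (the real waitforlisten helper does more bookkeeping, so treat this as a sketch):

    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll until the target answers on /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done

Everything after this point is RPC-driven: the target process lives in nvmf_tgt_ns_spdk, but rpc.py reaches it from the host namespace because the UNIX-domain socket is addressed by filesystem path, not by network namespace.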
00:41:47.131 [2024-12-09 09:53:54.382703] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:47.131 [2024-12-09 09:53:54.533749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:47.131 [2024-12-09 09:53:54.565921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:47.131 [2024-12-09 09:53:54.566000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:47.131 [2024-12-09 09:53:54.566019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:47.131 [2024-12-09 09:53:54.566032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:47.132 [2024-12-09 09:53:54.566042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:47.132 [2024-12-09 09:53:54.566412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:47.132 [2024-12-09 09:53:54.596006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:47.388 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:47.389 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:41:47.389 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:47.389 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:47.389 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:47.389 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:47.389 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:47.648 [2024-12-09 09:53:54.968232] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:47.648 ************************************ 00:41:47.648 START TEST lvs_grow_clean 00:41:47.648 ************************************ 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:47.648 09:53:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:41:47.648 09:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:41:47.648 09:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:47.906 09:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:47.906 09:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:48.165 09:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ead6802e-c604-4691-b1d1-c1e602266266 00:41:48.165 09:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead6802e-c604-4691-b1d1-c1e602266266 00:41:48.165 09:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:48.730 09:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:48.730 09:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:48.730 09:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ead6802e-c604-4691-b1d1-c1e602266266 lvol 150 00:41:48.988 09:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=843323eb-fb16-4808-9a8d-63626f6a3976 00:41:48.988 09:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:41:48.988 09:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:49.247 [2024-12-09 09:53:56.548014] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:49.247 [2024-12-09 09:53:56.548114] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:49.247 true 00:41:49.247 09:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead6802e-c604-4691-b1d1-c1e602266266 00:41:49.247 09:53:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:49.505 09:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:49.505 09:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:50.071 09:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 843323eb-fb16-4808-9a8d-63626f6a3976 00:41:50.329 09:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:41:50.587 [2024-12-09 09:53:57.952856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:50.587 09:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:41:51.154 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63184 00:41:51.154 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:51.154 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:51.154 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63184 /var/tmp/bdevperf.sock 00:41:51.154 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63184 ']' 00:41:51.154 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:51.154 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:51.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:51.154 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:51.154 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:51.154 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:51.154 [2024-12-09 09:53:58.412209] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
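At this point lvs_grow_clean has the full stack provisioned and bdevperf is starting up. Stripped of the xtrace framing, the RPC sequence is approximately the following (paths shortened into variables; the UUIDs are the ones captured above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    $rpc nvmf_create_transport -t tcp -o -u 8192
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)    # 150 MiB lvol on a 49-cluster store
    truncate -s 400M "$aio"                             # grow the backing file...
    $rpc bdev_aio_rescan aio_bdev                       # ...and let the aio bdev pick up the new size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

Note that after the rescan the lvstore still reports 49 total_data_clusters: resizing the aio bdev does not grow the store by itself, which is exactly what the bdev_lvol_grow_lvstore call later in the run is for.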
00:41:51.154 [2024-12-09 09:53:58.412361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63184 ] 00:41:51.154 [2024-12-09 09:53:58.566583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:51.154 [2024-12-09 09:53:58.616030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:51.154 [2024-12-09 09:53:58.653437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:51.413 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:51.413 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:41:51.413 09:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:51.671 Nvme0n1 00:41:51.929 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:52.187 [ 00:41:52.187 { 00:41:52.187 "name": "Nvme0n1", 00:41:52.187 "aliases": [ 00:41:52.187 "843323eb-fb16-4808-9a8d-63626f6a3976" 00:41:52.187 ], 00:41:52.187 "product_name": "NVMe disk", 00:41:52.187 "block_size": 4096, 00:41:52.187 "num_blocks": 38912, 00:41:52.187 "uuid": "843323eb-fb16-4808-9a8d-63626f6a3976", 00:41:52.187 "numa_id": -1, 00:41:52.187 "assigned_rate_limits": { 00:41:52.187 "rw_ios_per_sec": 0, 00:41:52.187 "rw_mbytes_per_sec": 0, 00:41:52.187 "r_mbytes_per_sec": 0, 00:41:52.187 "w_mbytes_per_sec": 0 00:41:52.187 }, 00:41:52.187 "claimed": false, 00:41:52.187 "zoned": false, 00:41:52.187 "supported_io_types": { 00:41:52.187 "read": true, 00:41:52.187 "write": true, 00:41:52.187 "unmap": true, 00:41:52.187 "flush": true, 00:41:52.187 "reset": true, 00:41:52.187 "nvme_admin": true, 00:41:52.187 "nvme_io": true, 00:41:52.187 "nvme_io_md": false, 00:41:52.187 "write_zeroes": true, 00:41:52.187 "zcopy": false, 00:41:52.187 "get_zone_info": false, 00:41:52.187 "zone_management": false, 00:41:52.187 "zone_append": false, 00:41:52.187 "compare": true, 00:41:52.187 "compare_and_write": true, 00:41:52.187 "abort": true, 00:41:52.187 "seek_hole": false, 00:41:52.187 "seek_data": false, 00:41:52.187 "copy": true, 00:41:52.187 "nvme_iov_md": false 00:41:52.187 }, 00:41:52.187 "memory_domains": [ 00:41:52.187 { 00:41:52.187 "dma_device_id": "system", 00:41:52.187 "dma_device_type": 1 00:41:52.187 } 00:41:52.187 ], 00:41:52.187 "driver_specific": { 00:41:52.187 "nvme": [ 00:41:52.187 { 00:41:52.187 "trid": { 00:41:52.187 "trtype": "TCP", 00:41:52.187 "adrfam": "IPv4", 00:41:52.187 "traddr": "10.0.0.3", 00:41:52.187 "trsvcid": "4420", 00:41:52.187 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:52.187 }, 00:41:52.187 "ctrlr_data": { 00:41:52.187 "cntlid": 1, 00:41:52.187 "vendor_id": "0x8086", 00:41:52.187 "model_number": "SPDK bdev Controller", 00:41:52.187 "serial_number": "SPDK0", 00:41:52.187 "firmware_revision": "25.01", 00:41:52.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:52.187 "oacs": { 00:41:52.187 "security": 0, 00:41:52.187 "format": 0, 00:41:52.187 "firmware": 0, 
00:41:52.187 "ns_manage": 0 00:41:52.187 }, 00:41:52.187 "multi_ctrlr": true, 00:41:52.187 "ana_reporting": false 00:41:52.187 }, 00:41:52.187 "vs": { 00:41:52.187 "nvme_version": "1.3" 00:41:52.187 }, 00:41:52.187 "ns_data": { 00:41:52.187 "id": 1, 00:41:52.187 "can_share": true 00:41:52.187 } 00:41:52.187 } 00:41:52.187 ], 00:41:52.187 "mp_policy": "active_passive" 00:41:52.187 } 00:41:52.187 } 00:41:52.187 ] 00:41:52.187 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63200 00:41:52.187 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:52.187 09:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:52.187 Running I/O for 10 seconds... 00:41:53.563 Latency(us) 00:41:53.563 [2024-12-09T09:54:01.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:53.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:53.563 Nvme0n1 : 1.00 5334.00 20.84 0.00 0.00 0.00 0.00 0.00 00:41:53.563 [2024-12-09T09:54:01.065Z] =================================================================================================================== 00:41:53.563 [2024-12-09T09:54:01.065Z] Total : 5334.00 20.84 0.00 0.00 0.00 0.00 0.00 00:41:53.563 00:41:54.129 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ead6802e-c604-4691-b1d1-c1e602266266 00:41:54.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:54.387 Nvme0n1 : 2.00 5270.50 20.59 0.00 0.00 0.00 0.00 0.00 00:41:54.387 [2024-12-09T09:54:01.889Z] =================================================================================================================== 00:41:54.387 [2024-12-09T09:54:01.889Z] Total : 5270.50 20.59 0.00 0.00 0.00 0.00 0.00 00:41:54.387 00:41:54.646 true 00:41:54.646 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead6802e-c604-4691-b1d1-c1e602266266 00:41:54.646 09:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:54.904 09:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:54.904 09:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:54.904 09:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63200 00:41:55.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:55.162 Nvme0n1 : 3.00 5418.67 21.17 0.00 0.00 0.00 0.00 0.00 00:41:55.162 [2024-12-09T09:54:02.664Z] =================================================================================================================== 00:41:55.162 [2024-12-09T09:54:02.664Z] Total : 5418.67 21.17 0.00 0.00 0.00 0.00 0.00 00:41:55.162 00:41:56.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:56.536 Nvme0n1 : 4.00 5270.50 20.59 0.00 0.00 0.00 0.00 0.00 00:41:56.536 [2024-12-09T09:54:04.038Z] 
=================================================================================================================== 00:41:56.536 [2024-12-09T09:54:04.038Z] Total : 5270.50 20.59 0.00 0.00 0.00 0.00 0.00 00:41:56.536 00:41:57.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:57.470 Nvme0n1 : 5.00 5283.20 20.64 0.00 0.00 0.00 0.00 0.00 00:41:57.470 [2024-12-09T09:54:04.972Z] =================================================================================================================== 00:41:57.470 [2024-12-09T09:54:04.972Z] Total : 5283.20 20.64 0.00 0.00 0.00 0.00 0.00 00:41:57.470 00:41:58.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:58.414 Nvme0n1 : 6.00 5423.33 21.18 0.00 0.00 0.00 0.00 0.00 00:41:58.414 [2024-12-09T09:54:05.916Z] =================================================================================================================== 00:41:58.414 [2024-12-09T09:54:05.916Z] Total : 5423.33 21.18 0.00 0.00 0.00 0.00 0.00 00:41:58.414 00:41:59.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:59.350 Nvme0n1 : 7.00 5501.29 21.49 0.00 0.00 0.00 0.00 0.00 00:41:59.350 [2024-12-09T09:54:06.852Z] =================================================================================================================== 00:41:59.350 [2024-12-09T09:54:06.852Z] Total : 5501.29 21.49 0.00 0.00 0.00 0.00 0.00 00:41:59.350 00:42:00.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:00.311 Nvme0n1 : 8.00 5591.50 21.84 0.00 0.00 0.00 0.00 0.00 00:42:00.311 [2024-12-09T09:54:07.813Z] =================================================================================================================== 00:42:00.311 [2024-12-09T09:54:07.813Z] Total : 5591.50 21.84 0.00 0.00 0.00 0.00 0.00 00:42:00.311 00:42:01.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:01.248 Nvme0n1 : 9.00 5591.11 21.84 0.00 0.00 0.00 0.00 0.00 00:42:01.248 [2024-12-09T09:54:08.750Z] =================================================================================================================== 00:42:01.248 [2024-12-09T09:54:08.750Z] Total : 5591.11 21.84 0.00 0.00 0.00 0.00 0.00 00:42:01.248 00:42:02.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:02.333 Nvme0n1 : 10.00 5667.00 22.14 0.00 0.00 0.00 0.00 0.00 00:42:02.333 [2024-12-09T09:54:09.835Z] =================================================================================================================== 00:42:02.333 [2024-12-09T09:54:09.835Z] Total : 5667.00 22.14 0.00 0.00 0.00 0.00 0.00 00:42:02.333 00:42:02.333 00:42:02.333 Latency(us) 00:42:02.333 [2024-12-09T09:54:09.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:02.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:02.333 Nvme0n1 : 10.02 5668.38 22.14 0.00 0.00 22574.18 12928.47 77689.95 00:42:02.333 [2024-12-09T09:54:09.835Z] =================================================================================================================== 00:42:02.333 [2024-12-09T09:54:09.835Z] Total : 5668.38 22.14 0.00 0.00 22574.18 12928.47 77689.95 00:42:02.333 { 00:42:02.333 "results": [ 00:42:02.333 { 00:42:02.333 "job": "Nvme0n1", 00:42:02.333 "core_mask": "0x2", 00:42:02.333 "workload": "randwrite", 00:42:02.333 "status": "finished", 00:42:02.333 "queue_depth": 128, 00:42:02.333 "io_size": 4096, 00:42:02.333 "runtime": 
10.020151, 00:42:02.333 "iops": 5668.377652193066, 00:42:02.333 "mibps": 22.142100203879163, 00:42:02.333 "io_failed": 0, 00:42:02.333 "io_timeout": 0, 00:42:02.333 "avg_latency_us": 22574.180223759482, 00:42:02.333 "min_latency_us": 12928.465454545454, 00:42:02.333 "max_latency_us": 77689.9490909091 00:42:02.333 } 00:42:02.333 ], 00:42:02.333 "core_count": 1 00:42:02.333 } 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63184 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63184 ']' 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63184 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63184 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63184' 00:42:02.333 killing process with pid 63184 00:42:02.333 Received shutdown signal, test time was about 10.000000 seconds 00:42:02.333 00:42:02.333 Latency(us) 00:42:02.333 [2024-12-09T09:54:09.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:02.333 [2024-12-09T09:54:09.835Z] =================================================================================================================== 00:42:02.333 [2024-12-09T09:54:09.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63184 00:42:02.333 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63184 00:42:02.601 09:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:42:02.860 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:03.426 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead6802e-c604-4691-b1d1-c1e602266266 00:42:03.426 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:42:03.426 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:42:03.426 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:42:03.426 09:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:03.993 [2024-12-09 09:54:11.290691] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead6802e-c604-4691-b1d1-c1e602266266 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead6802e-c604-4691-b1d1-c1e602266266 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:42:03.993 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead6802e-c604-4691-b1d1-c1e602266266 00:42:04.252 request: 00:42:04.252 { 00:42:04.252 "uuid": "ead6802e-c604-4691-b1d1-c1e602266266", 00:42:04.252 "method": "bdev_lvol_get_lvstores", 00:42:04.252 "req_id": 1 00:42:04.252 } 00:42:04.252 Got JSON-RPC error response 00:42:04.252 response: 00:42:04.252 { 00:42:04.252 "code": -19, 00:42:04.252 "message": "No such device" 00:42:04.252 } 00:42:04.252 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:42:04.252 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:04.252 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:04.252 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:04.252 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:04.511 aio_bdev 00:42:04.511 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
843323eb-fb16-4808-9a8d-63626f6a3976 00:42:04.511 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=843323eb-fb16-4808-9a8d-63626f6a3976 00:42:04.511 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:04.511 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:42:04.511 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:04.511 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:04.511 09:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:04.770 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 843323eb-fb16-4808-9a8d-63626f6a3976 -t 2000 00:42:05.029 [ 00:42:05.029 { 00:42:05.029 "name": "843323eb-fb16-4808-9a8d-63626f6a3976", 00:42:05.029 "aliases": [ 00:42:05.029 "lvs/lvol" 00:42:05.029 ], 00:42:05.029 "product_name": "Logical Volume", 00:42:05.029 "block_size": 4096, 00:42:05.029 "num_blocks": 38912, 00:42:05.029 "uuid": "843323eb-fb16-4808-9a8d-63626f6a3976", 00:42:05.029 "assigned_rate_limits": { 00:42:05.029 "rw_ios_per_sec": 0, 00:42:05.029 "rw_mbytes_per_sec": 0, 00:42:05.029 "r_mbytes_per_sec": 0, 00:42:05.029 "w_mbytes_per_sec": 0 00:42:05.029 }, 00:42:05.029 "claimed": false, 00:42:05.029 "zoned": false, 00:42:05.029 "supported_io_types": { 00:42:05.029 "read": true, 00:42:05.029 "write": true, 00:42:05.029 "unmap": true, 00:42:05.029 "flush": false, 00:42:05.029 "reset": true, 00:42:05.029 "nvme_admin": false, 00:42:05.029 "nvme_io": false, 00:42:05.029 "nvme_io_md": false, 00:42:05.029 "write_zeroes": true, 00:42:05.029 "zcopy": false, 00:42:05.029 "get_zone_info": false, 00:42:05.029 "zone_management": false, 00:42:05.029 "zone_append": false, 00:42:05.029 "compare": false, 00:42:05.029 "compare_and_write": false, 00:42:05.029 "abort": false, 00:42:05.029 "seek_hole": true, 00:42:05.029 "seek_data": true, 00:42:05.029 "copy": false, 00:42:05.029 "nvme_iov_md": false 00:42:05.029 }, 00:42:05.029 "driver_specific": { 00:42:05.029 "lvol": { 00:42:05.029 "lvol_store_uuid": "ead6802e-c604-4691-b1d1-c1e602266266", 00:42:05.029 "base_bdev": "aio_bdev", 00:42:05.029 "thin_provision": false, 00:42:05.029 "num_allocated_clusters": 38, 00:42:05.029 "snapshot": false, 00:42:05.029 "clone": false, 00:42:05.029 "esnap_clone": false 00:42:05.029 } 00:42:05.029 } 00:42:05.029 } 00:42:05.029 ] 00:42:05.029 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:42:05.029 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead6802e-c604-4691-b1d1-c1e602266266 00:42:05.029 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:05.597 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:05.597 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead6802e-c604-4691-b1d1-c1e602266266 00:42:05.597 09:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:05.597 09:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:05.597 09:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 843323eb-fb16-4808-9a8d-63626f6a3976 00:42:06.166 09:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ead6802e-c604-4691-b1d1-c1e602266266 00:42:06.424 09:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:06.682 09:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:42:06.941 ************************************ 00:42:06.941 END TEST lvs_grow_clean 00:42:06.941 ************************************ 00:42:06.941 00:42:06.941 real 0m19.438s 00:42:06.941 user 0m18.504s 00:42:06.941 sys 0m2.715s 00:42:06.941 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:06.941 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:07.200 ************************************ 00:42:07.200 START TEST lvs_grow_dirty 00:42:07.200 ************************************ 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:42:07.200 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:07.459 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:42:07.459 09:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:42:07.718 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:07.718 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:07.718 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:42:08.288 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:42:08.288 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:42:08.288 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 lvol 150 00:42:08.547 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=040d8359-dab6-44f8-824f-f78ff9f64f34 00:42:08.547 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:42:08.547 09:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:42:08.805 [2024-12-09 09:54:16.192021] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:42:08.805 [2024-12-09 09:54:16.192119] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:42:08.805 true 00:42:08.805 09:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:08.805 09:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:42:09.371 09:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:42:09.371 09:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:09.629 09:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 040d8359-dab6-44f8-824f-f78ff9f64f34 00:42:09.887 09:54:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:42:10.145 [2024-12-09 09:54:17.524706] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:10.145 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:42:10.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:10.403 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63469 00:42:10.403 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:42:10.403 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:10.403 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63469 /var/tmp/bdevperf.sock 00:42:10.403 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63469 ']' 00:42:10.403 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:10.403 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:10.403 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:10.403 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:10.403 09:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:10.661 [2024-12-09 09:54:17.942508] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
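The dirty variant has just rebuilt the same stack on a fresh store (lvs a6c2c080-..., lvol 040d8359-...) and is launching its own bdevperf instance. Both variants then share the grow-under-load step; reusing the variable names from the sketch above, it amounts to:

    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &        # 10 s of 128-deep randwrite on Nvme0n1
    run_test_pid=$!
    sleep 2
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                # grow while I/O is in flight
    clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))                                 # 49 clusters before the grow, 99 after
    wait "$run_test_pid"

so the pass criterion is simply that the store doubles to 99 clusters while bdevperf keeps completing writes, and that the 10-second run finishes cleanly.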
00:42:10.661 [2024-12-09 09:54:17.942940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63469 ] 00:42:10.661 [2024-12-09 09:54:18.098978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:10.661 [2024-12-09 09:54:18.140609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:10.919 [2024-12-09 09:54:18.171510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:10.919 09:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:10.919 09:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:42:10.919 09:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:42:11.486 Nvme0n1 00:42:11.486 09:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:42:11.744 [ 00:42:11.744 { 00:42:11.744 "name": "Nvme0n1", 00:42:11.744 "aliases": [ 00:42:11.744 "040d8359-dab6-44f8-824f-f78ff9f64f34" 00:42:11.744 ], 00:42:11.744 "product_name": "NVMe disk", 00:42:11.744 "block_size": 4096, 00:42:11.744 "num_blocks": 38912, 00:42:11.744 "uuid": "040d8359-dab6-44f8-824f-f78ff9f64f34", 00:42:11.744 "numa_id": -1, 00:42:11.744 "assigned_rate_limits": { 00:42:11.744 "rw_ios_per_sec": 0, 00:42:11.744 "rw_mbytes_per_sec": 0, 00:42:11.744 "r_mbytes_per_sec": 0, 00:42:11.744 "w_mbytes_per_sec": 0 00:42:11.744 }, 00:42:11.744 "claimed": false, 00:42:11.744 "zoned": false, 00:42:11.744 "supported_io_types": { 00:42:11.744 "read": true, 00:42:11.744 "write": true, 00:42:11.744 "unmap": true, 00:42:11.744 "flush": true, 00:42:11.744 "reset": true, 00:42:11.744 "nvme_admin": true, 00:42:11.744 "nvme_io": true, 00:42:11.744 "nvme_io_md": false, 00:42:11.744 "write_zeroes": true, 00:42:11.744 "zcopy": false, 00:42:11.744 "get_zone_info": false, 00:42:11.744 "zone_management": false, 00:42:11.744 "zone_append": false, 00:42:11.744 "compare": true, 00:42:11.744 "compare_and_write": true, 00:42:11.744 "abort": true, 00:42:11.744 "seek_hole": false, 00:42:11.744 "seek_data": false, 00:42:11.744 "copy": true, 00:42:11.744 "nvme_iov_md": false 00:42:11.744 }, 00:42:11.744 "memory_domains": [ 00:42:11.744 { 00:42:11.744 "dma_device_id": "system", 00:42:11.744 "dma_device_type": 1 00:42:11.744 } 00:42:11.744 ], 00:42:11.744 "driver_specific": { 00:42:11.744 "nvme": [ 00:42:11.744 { 00:42:11.744 "trid": { 00:42:11.744 "trtype": "TCP", 00:42:11.744 "adrfam": "IPv4", 00:42:11.744 "traddr": "10.0.0.3", 00:42:11.744 "trsvcid": "4420", 00:42:11.744 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:42:11.744 }, 00:42:11.744 "ctrlr_data": { 00:42:11.744 "cntlid": 1, 00:42:11.744 "vendor_id": "0x8086", 00:42:11.744 "model_number": "SPDK bdev Controller", 00:42:11.744 "serial_number": "SPDK0", 00:42:11.744 "firmware_revision": "25.01", 00:42:11.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:11.744 "oacs": { 00:42:11.744 "security": 0, 00:42:11.744 "format": 0, 00:42:11.744 "firmware": 0, 
00:42:11.744 "ns_manage": 0 00:42:11.744 }, 00:42:11.744 "multi_ctrlr": true, 00:42:11.744 "ana_reporting": false 00:42:11.744 }, 00:42:11.744 "vs": { 00:42:11.744 "nvme_version": "1.3" 00:42:11.744 }, 00:42:11.744 "ns_data": { 00:42:11.744 "id": 1, 00:42:11.744 "can_share": true 00:42:11.744 } 00:42:11.744 } 00:42:11.744 ], 00:42:11.744 "mp_policy": "active_passive" 00:42:11.744 } 00:42:11.744 } 00:42:11.744 ] 00:42:12.002 09:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63491 00:42:12.002 09:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:12.002 09:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:42:12.002 Running I/O for 10 seconds... 00:42:12.936 Latency(us) 00:42:12.936 [2024-12-09T09:54:20.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:12.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:12.936 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:42:12.936 [2024-12-09T09:54:20.438Z] =================================================================================================================== 00:42:12.936 [2024-12-09T09:54:20.438Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:42:12.936 00:42:13.870 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:14.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:14.128 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:42:14.128 [2024-12-09T09:54:21.630Z] =================================================================================================================== 00:42:14.128 [2024-12-09T09:54:21.630Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:42:14.128 00:42:14.128 true 00:42:14.387 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:14.387 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:42:14.646 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:42:14.646 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:42:14.646 09:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63491 00:42:15.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:15.213 Nvme0n1 : 3.00 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:42:15.213 [2024-12-09T09:54:22.715Z] =================================================================================================================== 00:42:15.213 [2024-12-09T09:54:22.715Z] Total : 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:42:15.213 00:42:16.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:16.148 Nvme0n1 : 4.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:42:16.148 [2024-12-09T09:54:23.650Z] 
=================================================================================================================== 00:42:16.148 [2024-12-09T09:54:23.650Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:42:16.148 00:42:17.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:17.084 Nvme0n1 : 5.00 6654.80 26.00 0.00 0.00 0.00 0.00 0.00 00:42:17.084 [2024-12-09T09:54:24.586Z] =================================================================================================================== 00:42:17.084 [2024-12-09T09:54:24.586Z] Total : 6654.80 26.00 0.00 0.00 0.00 0.00 0.00 00:42:17.084 00:42:18.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:18.019 Nvme0n1 : 6.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:42:18.019 [2024-12-09T09:54:25.521Z] =================================================================================================================== 00:42:18.019 [2024-12-09T09:54:25.521Z] Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:42:18.019 00:42:18.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:18.953 Nvme0n1 : 7.00 6712.86 26.22 0.00 0.00 0.00 0.00 0.00 00:42:18.953 [2024-12-09T09:54:26.455Z] =================================================================================================================== 00:42:18.953 [2024-12-09T09:54:26.455Z] Total : 6712.86 26.22 0.00 0.00 0.00 0.00 0.00 00:42:18.953 00:42:20.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:20.352 Nvme0n1 : 8.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:42:20.352 [2024-12-09T09:54:27.854Z] =================================================================================================================== 00:42:20.352 [2024-12-09T09:54:27.854Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:42:20.352 00:42:21.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:21.290 Nvme0n1 : 9.00 6745.11 26.35 0.00 0.00 0.00 0.00 0.00 00:42:21.290 [2024-12-09T09:54:28.792Z] =================================================================================================================== 00:42:21.290 [2024-12-09T09:54:28.792Z] Total : 6745.11 26.35 0.00 0.00 0.00 0.00 0.00 00:42:21.290 00:42:22.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:22.225 Nvme0n1 : 10.00 6769.10 26.44 0.00 0.00 0.00 0.00 0.00 00:42:22.225 [2024-12-09T09:54:29.727Z] =================================================================================================================== 00:42:22.225 [2024-12-09T09:54:29.727Z] Total : 6769.10 26.44 0.00 0.00 0.00 0.00 0.00 00:42:22.225 00:42:22.225 00:42:22.225 Latency(us) 00:42:22.225 [2024-12-09T09:54:29.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:22.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:22.225 Nvme0n1 : 10.00 6766.23 26.43 0.00 0.00 18910.20 15371.17 205902.20 00:42:22.225 [2024-12-09T09:54:29.727Z] =================================================================================================================== 00:42:22.225 [2024-12-09T09:54:29.728Z] Total : 6766.23 26.43 0.00 0.00 18910.20 15371.17 205902.20 00:42:22.226 { 00:42:22.226 "results": [ 00:42:22.226 { 00:42:22.226 "job": "Nvme0n1", 00:42:22.226 "core_mask": "0x2", 00:42:22.226 "workload": "randwrite", 00:42:22.226 "status": "finished", 00:42:22.226 "queue_depth": 128, 00:42:22.226 "io_size": 4096, 00:42:22.226 "runtime": 
10.004395, 00:42:22.226 "iops": 6766.226243565953, 00:42:22.226 "mibps": 26.430571263929505, 00:42:22.226 "io_failed": 0, 00:42:22.226 "io_timeout": 0, 00:42:22.226 "avg_latency_us": 18910.196937787736, 00:42:22.226 "min_latency_us": 15371.17090909091, 00:42:22.226 "max_latency_us": 205902.19636363635 00:42:22.226 } 00:42:22.226 ], 00:42:22.226 "core_count": 1 00:42:22.226 } 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63469 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63469 ']' 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63469 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63469 00:42:22.226 killing process with pid 63469 00:42:22.226 Received shutdown signal, test time was about 10.000000 seconds 00:42:22.226 00:42:22.226 Latency(us) 00:42:22.226 [2024-12-09T09:54:29.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:22.226 [2024-12-09T09:54:29.728Z] =================================================================================================================== 00:42:22.226 [2024-12-09T09:54:29.728Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63469' 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63469 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63469 00:42:22.226 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:42:22.484 09:54:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:22.743 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:22.743 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63103 
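The run above grows the logical volume store while bdevperf keeps 128-deep random writes in flight, then re-reads the cluster count over RPC before the old target is killed. Reduced to the commands visible in this log (the lvstore UUID is the one from this particular run, not a fixed value), the flow is roughly:

    # start the timed I/O job against the already-attached Nvme0n1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!

    # grow the lvstore onto the enlarged base bdev while I/O is still running
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0

    # verify the new data-cluster count
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 \
        | jq -r '.[0].total_data_clusters'

    wait "$run_test_pid"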
00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63103 00:42:23.341 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63103 Killed "${NVMF_APP[@]}" "$@" 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63624 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63624 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63624 ']' 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:23.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:23.341 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:23.341 [2024-12-09 09:54:30.647893] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:42:23.341 [2024-12-09 09:54:30.648024] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:23.341 [2024-12-09 09:54:30.805016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:23.341 [2024-12-09 09:54:30.836337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:23.341 [2024-12-09 09:54:30.836399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:23.341 [2024-12-09 09:54:30.836412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:23.341 [2024-12-09 09:54:30.836421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:23.341 [2024-12-09 09:54:30.836428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
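After the hard kill of pid 63103, the dirty-recovery variant restarts the NVMe-oF target and waits for its RPC socket before reattaching the AIO-backed lvstore. A minimal sketch of that restart, using the same namespace, core mask and helper seen here (waitforlisten is the test suite's common.sh helper that polls /var/tmp/spdk.sock):

    # relaunch nvmf_tgt inside the test namespace on core 0 with all tracepoint groups enabled
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # block until the RPC socket accepts requests
    waitforlisten "$nvmfpid"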
00:42:23.341 [2024-12-09 09:54:30.836747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.600 [2024-12-09 09:54:30.866085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:23.600 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:23.600 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:42:23.600 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:23.600 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:23.600 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:23.600 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:23.600 09:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:23.859 [2024-12-09 09:54:31.268678] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:42:23.859 [2024-12-09 09:54:31.269150] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:42:23.859 [2024-12-09 09:54:31.269438] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:42:23.859 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:42:23.859 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 040d8359-dab6-44f8-824f-f78ff9f64f34 00:42:23.859 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=040d8359-dab6-44f8-824f-f78ff9f64f34 00:42:23.859 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:23.859 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:42:23.859 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:23.859 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:23.859 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:24.427 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 040d8359-dab6-44f8-824f-f78ff9f64f34 -t 2000 00:42:24.427 [ 00:42:24.427 { 00:42:24.427 "name": "040d8359-dab6-44f8-824f-f78ff9f64f34", 00:42:24.427 "aliases": [ 00:42:24.427 "lvs/lvol" 00:42:24.427 ], 00:42:24.427 "product_name": "Logical Volume", 00:42:24.427 "block_size": 4096, 00:42:24.427 "num_blocks": 38912, 00:42:24.427 "uuid": "040d8359-dab6-44f8-824f-f78ff9f64f34", 00:42:24.427 "assigned_rate_limits": { 00:42:24.427 "rw_ios_per_sec": 0, 00:42:24.427 "rw_mbytes_per_sec": 0, 00:42:24.427 "r_mbytes_per_sec": 0, 00:42:24.427 "w_mbytes_per_sec": 0 00:42:24.427 }, 00:42:24.427 
"claimed": false, 00:42:24.427 "zoned": false, 00:42:24.427 "supported_io_types": { 00:42:24.427 "read": true, 00:42:24.427 "write": true, 00:42:24.427 "unmap": true, 00:42:24.427 "flush": false, 00:42:24.427 "reset": true, 00:42:24.427 "nvme_admin": false, 00:42:24.427 "nvme_io": false, 00:42:24.427 "nvme_io_md": false, 00:42:24.427 "write_zeroes": true, 00:42:24.427 "zcopy": false, 00:42:24.427 "get_zone_info": false, 00:42:24.427 "zone_management": false, 00:42:24.427 "zone_append": false, 00:42:24.427 "compare": false, 00:42:24.427 "compare_and_write": false, 00:42:24.427 "abort": false, 00:42:24.427 "seek_hole": true, 00:42:24.427 "seek_data": true, 00:42:24.427 "copy": false, 00:42:24.427 "nvme_iov_md": false 00:42:24.427 }, 00:42:24.427 "driver_specific": { 00:42:24.427 "lvol": { 00:42:24.427 "lvol_store_uuid": "a6c2c080-8f0f-44b6-ad49-93211e9e6cb0", 00:42:24.427 "base_bdev": "aio_bdev", 00:42:24.427 "thin_provision": false, 00:42:24.427 "num_allocated_clusters": 38, 00:42:24.427 "snapshot": false, 00:42:24.427 "clone": false, 00:42:24.427 "esnap_clone": false 00:42:24.427 } 00:42:24.427 } 00:42:24.427 } 00:42:24.427 ] 00:42:24.685 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:42:24.685 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:24.685 09:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:42:24.944 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:42:24.944 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:24.944 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:42:25.203 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:42:25.203 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:25.462 [2024-12-09 09:54:32.826447] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:25.462 09:54:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:42:25.462 09:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:25.721 request: 00:42:25.721 { 00:42:25.721 "uuid": "a6c2c080-8f0f-44b6-ad49-93211e9e6cb0", 00:42:25.721 "method": "bdev_lvol_get_lvstores", 00:42:25.721 "req_id": 1 00:42:25.721 } 00:42:25.721 Got JSON-RPC error response 00:42:25.721 response: 00:42:25.721 { 00:42:25.721 "code": -19, 00:42:25.721 "message": "No such device" 00:42:25.721 } 00:42:25.721 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:42:25.721 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:25.979 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:25.979 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:25.979 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:26.238 aio_bdev 00:42:26.238 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 040d8359-dab6-44f8-824f-f78ff9f64f34 00:42:26.238 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=040d8359-dab6-44f8-824f-f78ff9f64f34 00:42:26.238 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:26.238 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:42:26.238 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:26.238 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:26.238 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:26.497 09:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 040d8359-dab6-44f8-824f-f78ff9f64f34 -t 2000 00:42:26.755 [ 00:42:26.755 { 
00:42:26.755 "name": "040d8359-dab6-44f8-824f-f78ff9f64f34", 00:42:26.755 "aliases": [ 00:42:26.755 "lvs/lvol" 00:42:26.755 ], 00:42:26.755 "product_name": "Logical Volume", 00:42:26.755 "block_size": 4096, 00:42:26.755 "num_blocks": 38912, 00:42:26.755 "uuid": "040d8359-dab6-44f8-824f-f78ff9f64f34", 00:42:26.755 "assigned_rate_limits": { 00:42:26.755 "rw_ios_per_sec": 0, 00:42:26.755 "rw_mbytes_per_sec": 0, 00:42:26.755 "r_mbytes_per_sec": 0, 00:42:26.755 "w_mbytes_per_sec": 0 00:42:26.755 }, 00:42:26.755 "claimed": false, 00:42:26.755 "zoned": false, 00:42:26.755 "supported_io_types": { 00:42:26.755 "read": true, 00:42:26.755 "write": true, 00:42:26.755 "unmap": true, 00:42:26.755 "flush": false, 00:42:26.755 "reset": true, 00:42:26.755 "nvme_admin": false, 00:42:26.755 "nvme_io": false, 00:42:26.755 "nvme_io_md": false, 00:42:26.755 "write_zeroes": true, 00:42:26.755 "zcopy": false, 00:42:26.755 "get_zone_info": false, 00:42:26.755 "zone_management": false, 00:42:26.755 "zone_append": false, 00:42:26.755 "compare": false, 00:42:26.755 "compare_and_write": false, 00:42:26.755 "abort": false, 00:42:26.755 "seek_hole": true, 00:42:26.755 "seek_data": true, 00:42:26.755 "copy": false, 00:42:26.755 "nvme_iov_md": false 00:42:26.755 }, 00:42:26.755 "driver_specific": { 00:42:26.755 "lvol": { 00:42:26.755 "lvol_store_uuid": "a6c2c080-8f0f-44b6-ad49-93211e9e6cb0", 00:42:26.755 "base_bdev": "aio_bdev", 00:42:26.755 "thin_provision": false, 00:42:26.755 "num_allocated_clusters": 38, 00:42:26.755 "snapshot": false, 00:42:26.755 "clone": false, 00:42:26.755 "esnap_clone": false 00:42:26.755 } 00:42:26.755 } 00:42:26.755 } 00:42:26.755 ] 00:42:26.755 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:42:26.755 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:26.755 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:27.014 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:27.014 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:27.014 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:27.582 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:27.582 09:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 040d8359-dab6-44f8-824f-f78ff9f64f34 00:42:27.582 09:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a6c2c080-8f0f-44b6-ad49-93211e9e6cb0 00:42:28.147 09:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:28.147 09:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:42:28.714 ************************************ 00:42:28.714 END TEST lvs_grow_dirty 00:42:28.714 ************************************ 00:42:28.714 00:42:28.714 real 0m21.572s 00:42:28.714 user 0m46.134s 00:42:28.714 sys 0m7.878s 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:42:28.714 nvmf_trace.0 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:28.714 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:28.973 rmmod nvme_tcp 00:42:28.973 rmmod nvme_fabrics 00:42:28.973 rmmod nvme_keyring 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63624 ']' 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63624 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63624 ']' 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63624 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:42:28.973 09:54:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:28.973 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63624 00:42:29.232 killing process with pid 63624 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63624' 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63624 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63624 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:29.232 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:42:29.491 00:42:29.491 real 0m43.256s 00:42:29.491 user 1m11.316s 00:42:29.491 sys 0m11.502s 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:29.491 ************************************ 00:42:29.491 END TEST nvmf_lvs_grow 00:42:29.491 ************************************ 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:42:29.491 ************************************ 00:42:29.491 START TEST nvmf_bdev_io_wait 00:42:29.491 ************************************ 00:42:29.491 09:54:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:42:29.752 * Looking for test storage... 
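The nvmftestfini teardown recorded above archives the tracepoint buffer, kills the target, unloads the kernel initiator modules and removes the virtual test network. Condensed to the commands this log shows (the archive path is the one used by this job; remove_spdk_ns finally drops the nvmf_tgt_ns_spdk namespace itself):

    # save the tracepoint shared-memory file for offline analysis
    tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0

    # unload the kernel NVMe/TCP initiator stack and drop the SPDK iptables rules
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # tear down the bridge, veth pairs and the interfaces inside the target namespace
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2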
00:42:29.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.752 --rc genhtml_branch_coverage=1 00:42:29.752 --rc genhtml_function_coverage=1 00:42:29.752 --rc genhtml_legend=1 00:42:29.752 --rc geninfo_all_blocks=1 00:42:29.752 --rc geninfo_unexecuted_blocks=1 00:42:29.752 00:42:29.752 ' 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.752 --rc genhtml_branch_coverage=1 00:42:29.752 --rc genhtml_function_coverage=1 00:42:29.752 --rc genhtml_legend=1 00:42:29.752 --rc geninfo_all_blocks=1 00:42:29.752 --rc geninfo_unexecuted_blocks=1 00:42:29.752 00:42:29.752 ' 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.752 --rc genhtml_branch_coverage=1 00:42:29.752 --rc genhtml_function_coverage=1 00:42:29.752 --rc genhtml_legend=1 00:42:29.752 --rc geninfo_all_blocks=1 00:42:29.752 --rc geninfo_unexecuted_blocks=1 00:42:29.752 00:42:29.752 ' 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.752 --rc genhtml_branch_coverage=1 00:42:29.752 --rc genhtml_function_coverage=1 00:42:29.752 --rc genhtml_legend=1 00:42:29.752 --rc geninfo_all_blocks=1 00:42:29.752 --rc geninfo_unexecuted_blocks=1 00:42:29.752 00:42:29.752 ' 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:29.752 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:29.753 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
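nvmftestinit below rebuilds the virtual test network for the bdev_io_wait run: veth pairs joined by a bridge, with the initiator addresses kept in the root namespace and the target addresses moved into nvmf_tgt_ns_spdk. Ignoring the second interface pair and the error handling, the link setup visible further down reduces to:

    # the target end of the veth pair lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # initiator 10.0.0.1 will connect to the target listener on 10.0.0.3:4420
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bridge the host-side ends so initiator and target namespaces can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # allow the NVMe/TCP listener port through the host firewall
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT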
00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:29.753 
09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:29.753 Cannot find device "nvmf_init_br" 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:29.753 Cannot find device "nvmf_init_br2" 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:29.753 Cannot find device "nvmf_tgt_br" 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:29.753 Cannot find device "nvmf_tgt_br2" 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:29.753 Cannot find device "nvmf_init_br" 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:29.753 Cannot find device "nvmf_init_br2" 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:42:29.753 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:29.753 Cannot find device "nvmf_tgt_br" 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:30.016 Cannot find device "nvmf_tgt_br2" 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:30.016 Cannot find device "nvmf_br" 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:30.016 Cannot find device "nvmf_init_if" 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:30.016 Cannot find device "nvmf_init_if2" 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:30.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:42:30.016 
09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:30.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:30.016 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:30.285 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:30.285 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:30.285 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:30.285 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:30.285 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:30.285 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:30.285 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:30.286 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:30.286 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:42:30.286 00:42:30.286 --- 10.0.0.3 ping statistics --- 00:42:30.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.286 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:30.286 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:30.286 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:42:30.286 00:42:30.286 --- 10.0.0.4 ping statistics --- 00:42:30.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.286 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:30.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:30.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:42:30.286 00:42:30.286 --- 10.0.0.1 ping statistics --- 00:42:30.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.286 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:30.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:30.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:42:30.286 00:42:30.286 --- 10.0.0.2 ping statistics --- 00:42:30.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.286 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63995 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63995 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63995 ']' 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:30.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:30.286 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.286 [2024-12-09 09:54:37.712270] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:42:30.286 [2024-12-09 09:54:37.712373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:30.544 [2024-12-09 09:54:37.862069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:30.544 [2024-12-09 09:54:37.896533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:30.544 [2024-12-09 09:54:37.896597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:30.544 [2024-12-09 09:54:37.896623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:30.544 [2024-12-09 09:54:37.896639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:30.544 [2024-12-09 09:54:37.896651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:30.544 [2024-12-09 09:54:37.897606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:30.545 [2024-12-09 09:54:37.897714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:30.545 [2024-12-09 09:54:37.897773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:30.545 [2024-12-09 09:54:37.897782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:30.545 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:30.545 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:42:30.545 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:30.545 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:30.545 09:54:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.545 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:30.545 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:42:30.545 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.545 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.545 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.545 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:42:30.545 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.545 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.805 [2024-12-09 09:54:38.077260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.805 [2024-12-09 09:54:38.088114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.805 Malloc0 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.805 [2024-12-09 09:54:38.135077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64021 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64023 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:42:30.805 09:54:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64025 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:30.805 { 00:42:30.805 "params": { 00:42:30.805 "name": "Nvme$subsystem", 00:42:30.805 "trtype": "$TEST_TRANSPORT", 00:42:30.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:30.805 "adrfam": "ipv4", 00:42:30.805 "trsvcid": "$NVMF_PORT", 00:42:30.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:30.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:30.805 "hdgst": ${hdgst:-false}, 00:42:30.805 "ddgst": ${ddgst:-false} 00:42:30.805 }, 00:42:30.805 "method": "bdev_nvme_attach_controller" 00:42:30.805 } 00:42:30.805 EOF 00:42:30.805 )") 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:30.805 { 00:42:30.805 "params": { 00:42:30.805 "name": "Nvme$subsystem", 00:42:30.805 "trtype": "$TEST_TRANSPORT", 00:42:30.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:30.805 "adrfam": "ipv4", 00:42:30.805 "trsvcid": "$NVMF_PORT", 00:42:30.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:30.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:30.805 "hdgst": ${hdgst:-false}, 00:42:30.805 "ddgst": ${ddgst:-false} 00:42:30.805 }, 00:42:30.805 "method": "bdev_nvme_attach_controller" 00:42:30.805 } 00:42:30.805 EOF 00:42:30.805 )") 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64026 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:30.805 { 00:42:30.805 "params": { 00:42:30.805 "name": "Nvme$subsystem", 00:42:30.805 "trtype": "$TEST_TRANSPORT", 00:42:30.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:30.805 "adrfam": "ipv4", 00:42:30.805 "trsvcid": "$NVMF_PORT", 00:42:30.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:30.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:30.805 "hdgst": ${hdgst:-false}, 00:42:30.805 "ddgst": ${ddgst:-false} 00:42:30.805 }, 00:42:30.805 "method": "bdev_nvme_attach_controller" 00:42:30.805 } 00:42:30.805 EOF 00:42:30.805 )") 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:30.805 "params": { 00:42:30.805 "name": "Nvme1", 00:42:30.805 "trtype": "tcp", 00:42:30.805 "traddr": "10.0.0.3", 00:42:30.805 "adrfam": "ipv4", 00:42:30.805 "trsvcid": "4420", 00:42:30.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:30.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:30.805 "hdgst": false, 00:42:30.805 "ddgst": false 00:42:30.805 }, 00:42:30.805 "method": "bdev_nvme_attach_controller" 00:42:30.805 }' 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:30.805 { 00:42:30.805 "params": { 00:42:30.805 "name": "Nvme$subsystem", 00:42:30.805 "trtype": "$TEST_TRANSPORT", 00:42:30.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:30.805 "adrfam": "ipv4", 00:42:30.805 "trsvcid": "$NVMF_PORT", 00:42:30.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:30.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:30.805 "hdgst": ${hdgst:-false}, 00:42:30.805 "ddgst": ${ddgst:-false} 00:42:30.805 }, 00:42:30.805 "method": "bdev_nvme_attach_controller" 00:42:30.805 } 00:42:30.805 EOF 00:42:30.805 )") 00:42:30.805 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:30.806 "params": { 00:42:30.806 "name": "Nvme1", 00:42:30.806 "trtype": "tcp", 00:42:30.806 "traddr": "10.0.0.3", 00:42:30.806 "adrfam": "ipv4", 00:42:30.806 "trsvcid": "4420", 00:42:30.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:30.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:30.806 "hdgst": false, 00:42:30.806 "ddgst": false 00:42:30.806 }, 00:42:30.806 "method": "bdev_nvme_attach_controller" 00:42:30.806 }' 00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:30.806 "params": { 00:42:30.806 "name": "Nvme1", 00:42:30.806 "trtype": "tcp", 00:42:30.806 "traddr": "10.0.0.3", 00:42:30.806 "adrfam": "ipv4", 00:42:30.806 "trsvcid": "4420", 00:42:30.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:30.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:30.806 "hdgst": false, 00:42:30.806 "ddgst": false 00:42:30.806 }, 00:42:30.806 "method": "bdev_nvme_attach_controller" 00:42:30.806 }' 00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:30.806 "params": { 00:42:30.806 "name": "Nvme1", 00:42:30.806 "trtype": "tcp", 00:42:30.806 "traddr": "10.0.0.3", 00:42:30.806 "adrfam": "ipv4", 00:42:30.806 "trsvcid": "4420", 00:42:30.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:30.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:30.806 "hdgst": false, 00:42:30.806 "ddgst": false 00:42:30.806 }, 00:42:30.806 "method": "bdev_nvme_attach_controller" 00:42:30.806 }' 00:42:30.806 09:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64021 00:42:30.806 [2024-12-09 09:54:38.231901] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:42:30.806 [2024-12-09 09:54:38.232001] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:42:30.806 [2024-12-09 09:54:38.241293] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:42:30.806 [2024-12-09 09:54:38.241394] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:42:30.806 [2024-12-09 09:54:38.245525] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:42:30.806 [2024-12-09 09:54:38.245622] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:42:30.806 [2024-12-09 09:54:38.245591] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:42:30.806 [2024-12-09 09:54:38.246594] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:42:31.064 [2024-12-09 09:54:38.417880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.064 [2024-12-09 09:54:38.444995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:31.064 [2024-12-09 09:54:38.453508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.064 [2024-12-09 09:54:38.458748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:31.064 [2024-12-09 09:54:38.492840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:31.064 [2024-12-09 09:54:38.503656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.064 [2024-12-09 09:54:38.506474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:31.064 [2024-12-09 09:54:38.543743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:31.064 [2024-12-09 09:54:38.560558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:31.064 [2024-12-09 09:54:38.561384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.323 Running I/O for 1 seconds... 00:42:31.323 [2024-12-09 09:54:38.602132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:42:31.323 Running I/O for 1 seconds... 00:42:31.323 [2024-12-09 09:54:38.619141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:31.323 Running I/O for 1 seconds... 00:42:31.323 Running I/O for 1 seconds... 
00:42:32.258 85280.00 IOPS, 333.12 MiB/s 00:42:32.258 Latency(us) 00:42:32.258 [2024-12-09T09:54:39.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.258 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:42:32.258 Nvme1n1 : 1.00 84962.59 331.89 0.00 0.00 1496.30 711.21 3738.53 00:42:32.258 [2024-12-09T09:54:39.760Z] =================================================================================================================== 00:42:32.258 [2024-12-09T09:54:39.760Z] Total : 84962.59 331.89 0.00 0.00 1496.30 711.21 3738.53 00:42:32.258 7956.00 IOPS, 31.08 MiB/s 00:42:32.258 Latency(us) 00:42:32.258 [2024-12-09T09:54:39.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.258 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:42:32.258 Nvme1n1 : 1.01 8034.49 31.38 0.00 0.00 15862.93 6047.19 19899.11 00:42:32.258 [2024-12-09T09:54:39.760Z] =================================================================================================================== 00:42:32.258 [2024-12-09T09:54:39.760Z] Total : 8034.49 31.38 0.00 0.00 15862.93 6047.19 19899.11 00:42:32.258 5500.00 IOPS, 21.48 MiB/s 00:42:32.258 Latency(us) 00:42:32.258 [2024-12-09T09:54:39.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.258 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:42:32.258 Nvme1n1 : 1.02 5542.47 21.65 0.00 0.00 22898.94 10724.07 33125.47 00:42:32.258 [2024-12-09T09:54:39.760Z] =================================================================================================================== 00:42:32.258 [2024-12-09T09:54:39.760Z] Total : 5542.47 21.65 0.00 0.00 22898.94 10724.07 33125.47 00:42:32.258 8405.00 IOPS, 32.83 MiB/s 00:42:32.258 Latency(us) 00:42:32.258 [2024-12-09T09:54:39.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.258 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:42:32.258 Nvme1n1 : 1.01 8470.38 33.09 0.00 0.00 15029.60 6196.13 20614.05 00:42:32.258 [2024-12-09T09:54:39.760Z] =================================================================================================================== 00:42:32.258 [2024-12-09T09:54:39.760Z] Total : 8470.38 33.09 0.00 0.00 15029.60 6196.13 20614.05 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64023 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64025 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64026 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:32.516 09:54:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:32.516 rmmod nvme_tcp 00:42:32.516 rmmod nvme_fabrics 00:42:32.516 rmmod nvme_keyring 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63995 ']' 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63995 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63995 ']' 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63995 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63995 00:42:32.775 killing process with pid 63995 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63995' 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63995 00:42:32.775 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63995 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:42:33.033 00:42:33.033 real 0m3.547s 00:42:33.033 user 0m13.553s 00:42:33.033 sys 0m2.331s 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:33.033 ************************************ 00:42:33.033 END TEST nvmf_bdev_io_wait 00:42:33.033 ************************************ 00:42:33.033 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:33.292 09:54:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:42:33.292 09:54:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:33.292 09:54:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:33.292 09:54:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:42:33.292 ************************************ 00:42:33.293 START TEST nvmf_queue_depth 00:42:33.293 ************************************ 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:42:33.293 * Looking for test storage... 
00:42:33.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:33.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.293 --rc genhtml_branch_coverage=1 00:42:33.293 --rc genhtml_function_coverage=1 00:42:33.293 --rc genhtml_legend=1 00:42:33.293 --rc geninfo_all_blocks=1 00:42:33.293 --rc geninfo_unexecuted_blocks=1 00:42:33.293 00:42:33.293 ' 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:33.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.293 --rc genhtml_branch_coverage=1 00:42:33.293 --rc genhtml_function_coverage=1 00:42:33.293 --rc genhtml_legend=1 00:42:33.293 --rc geninfo_all_blocks=1 00:42:33.293 --rc geninfo_unexecuted_blocks=1 00:42:33.293 00:42:33.293 ' 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:33.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.293 --rc genhtml_branch_coverage=1 00:42:33.293 --rc genhtml_function_coverage=1 00:42:33.293 --rc genhtml_legend=1 00:42:33.293 --rc geninfo_all_blocks=1 00:42:33.293 --rc geninfo_unexecuted_blocks=1 00:42:33.293 00:42:33.293 ' 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:33.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.293 --rc genhtml_branch_coverage=1 00:42:33.293 --rc genhtml_function_coverage=1 00:42:33.293 --rc genhtml_legend=1 00:42:33.293 --rc geninfo_all_blocks=1 00:42:33.293 --rc geninfo_unexecuted_blocks=1 00:42:33.293 00:42:33.293 ' 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:33.293 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:33.294 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:42:33.294 
09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:33.294 09:54:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:33.294 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:33.553 Cannot find device "nvmf_init_br" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:33.553 Cannot find device "nvmf_init_br2" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:33.553 Cannot find device "nvmf_tgt_br" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:33.553 Cannot find device "nvmf_tgt_br2" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:33.553 Cannot find device "nvmf_init_br" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:33.553 Cannot find device "nvmf_init_br2" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:33.553 Cannot find device "nvmf_tgt_br" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:33.553 Cannot find device "nvmf_tgt_br2" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:33.553 Cannot find device "nvmf_br" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:33.553 Cannot find device "nvmf_init_if" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:33.553 Cannot find device "nvmf_init_if2" 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:33.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:33.553 09:54:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:33.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:33.553 09:54:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:33.553 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:33.553 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:33.553 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:33.553 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:33.553 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:33.553 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:33.553 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:33.812 
09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:33.812 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:33.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:42:33.812 00:42:33.812 --- 10.0.0.3 ping statistics --- 00:42:33.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:33.812 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:33.812 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:33.812 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:42:33.812 00:42:33.812 --- 10.0.0.4 ping statistics --- 00:42:33.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:33.812 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:33.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:33.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:42:33.812 00:42:33.812 --- 10.0.0.1 ping statistics --- 00:42:33.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:33.812 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:33.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:33.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:42:33.812 00:42:33.812 --- 10.0.0.2 ping statistics --- 00:42:33.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:33.812 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:33.812 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64284 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64284 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64284 ']' 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:33.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:33.813 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:33.813 [2024-12-09 09:54:41.288383] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:42:33.813 [2024-12-09 09:54:41.288504] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:34.071 [2024-12-09 09:54:41.452486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:34.071 [2024-12-09 09:54:41.486365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:34.071 [2024-12-09 09:54:41.486426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:34.071 [2024-12-09 09:54:41.486438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:34.071 [2024-12-09 09:54:41.486446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:34.071 [2024-12-09 09:54:41.486454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:34.071 [2024-12-09 09:54:41.486767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:34.071 [2024-12-09 09:54:41.518765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:34.071 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:34.329 [2024-12-09 09:54:41.607817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:34.329 Malloc0 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:34.329 [2024-12-09 09:54:41.647132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:34.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64314 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64314 /var/tmp/bdevperf.sock 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64314 ']' 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:34.329 09:54:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:34.329 [2024-12-09 09:54:41.708298] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:42:34.329 [2024-12-09 09:54:41.708417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64314 ] 00:42:34.588 [2024-12-09 09:54:41.879233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:34.588 [2024-12-09 09:54:41.927805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:34.588 [2024-12-09 09:54:41.964618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:35.521 09:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:35.521 09:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:42:35.521 09:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:35.521 09:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.521 09:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:35.521 NVMe0n1 00:42:35.521 09:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.521 09:54:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:35.779 Running I/O for 10 seconds... 00:42:37.648 6203.00 IOPS, 24.23 MiB/s [2024-12-09T09:54:46.524Z] 6664.50 IOPS, 26.03 MiB/s [2024-12-09T09:54:47.088Z] 6874.67 IOPS, 26.85 MiB/s [2024-12-09T09:54:48.468Z] 7172.25 IOPS, 28.02 MiB/s [2024-12-09T09:54:49.402Z] 7224.80 IOPS, 28.22 MiB/s [2024-12-09T09:54:50.369Z] 7314.50 IOPS, 28.57 MiB/s [2024-12-09T09:54:51.304Z] 7331.14 IOPS, 28.64 MiB/s [2024-12-09T09:54:52.239Z] 7362.50 IOPS, 28.76 MiB/s [2024-12-09T09:54:53.175Z] 7410.00 IOPS, 28.95 MiB/s [2024-12-09T09:54:53.433Z] 7455.00 IOPS, 29.12 MiB/s 00:42:45.931 Latency(us) 00:42:45.931 [2024-12-09T09:54:53.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:45.931 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:42:45.931 Verification LBA range: start 0x0 length 0x4000 00:42:45.931 NVMe0n1 : 10.09 7483.51 29.23 0.00 0.00 136051.00 27882.59 100091.35 00:42:45.931 [2024-12-09T09:54:53.433Z] =================================================================================================================== 00:42:45.932 [2024-12-09T09:54:53.434Z] Total : 7483.51 29.23 0.00 0.00 136051.00 27882.59 100091.35 00:42:45.932 { 00:42:45.932 "results": [ 00:42:45.932 { 00:42:45.932 "job": "NVMe0n1", 00:42:45.932 "core_mask": "0x1", 00:42:45.932 "workload": "verify", 00:42:45.932 "status": "finished", 00:42:45.932 "verify_range": { 00:42:45.932 "start": 0, 00:42:45.932 "length": 16384 00:42:45.932 }, 00:42:45.932 "queue_depth": 1024, 00:42:45.932 "io_size": 4096, 00:42:45.932 "runtime": 10.093397, 00:42:45.932 "iops": 7483.506296244961, 00:42:45.932 "mibps": 29.23244646970688, 00:42:45.932 "io_failed": 0, 00:42:45.932 "io_timeout": 0, 00:42:45.932 "avg_latency_us": 136050.99804668338, 00:42:45.932 "min_latency_us": 27882.589090909092, 00:42:45.932 "max_latency_us": 100091.34545454546 
00:42:45.932 } 00:42:45.932 ], 00:42:45.932 "core_count": 1 00:42:45.932 } 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64314 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64314 ']' 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64314 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64314 00:42:45.932 killing process with pid 64314 00:42:45.932 Received shutdown signal, test time was about 10.000000 seconds 00:42:45.932 00:42:45.932 Latency(us) 00:42:45.932 [2024-12-09T09:54:53.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:45.932 [2024-12-09T09:54:53.434Z] =================================================================================================================== 00:42:45.932 [2024-12-09T09:54:53.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64314' 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64314 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64314 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:45.932 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:46.191 rmmod nvme_tcp 00:42:46.191 rmmod nvme_fabrics 00:42:46.191 rmmod nvme_keyring 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64284 ']' 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64284 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64284 ']' 
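Stripped of the xtrace noise, the queue-depth test that just finished boils down to a short RPC sequence. The sketch below condenses the calls recorded above into plain rpc.py invocations; rpc_cmd in the trace is a wrapper around scripts/rpc.py, the short command names stand for the full build paths shown in the log, and the socket paths, NQN and 10.0.0.3 listener address are simply the values this run happened to use:

    # target side: one 64 MiB malloc namespace behind an NVMe/TCP listener
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # initiator side: bdevperf attaches over TCP and drives a 1024-deep verify workload for 10 s
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The ~7.5k IOPS / 29 MiB/s figure reported above is the average over that 10-second verify run at queue depth 1024.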
00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64284 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64284 00:42:46.191 killing process with pid 64284 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64284' 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64284 00:42:46.191 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64284 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:46.449 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:46.450 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:46.450 09:54:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:46.708 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:46.708 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:46.709 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:46.709 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:46.709 09:54:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:42:46.709 00:42:46.709 real 0m13.441s 00:42:46.709 user 0m23.443s 00:42:46.709 sys 0m2.221s 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:46.709 ************************************ 00:42:46.709 END TEST nvmf_queue_depth 00:42:46.709 ************************************ 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:42:46.709 ************************************ 00:42:46.709 START TEST nvmf_target_multipath 00:42:46.709 ************************************ 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:42:46.709 * Looking for test storage... 
00:42:46.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:42:46.709 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:46.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.968 --rc genhtml_branch_coverage=1 00:42:46.968 --rc genhtml_function_coverage=1 00:42:46.968 --rc genhtml_legend=1 00:42:46.968 --rc geninfo_all_blocks=1 00:42:46.968 --rc geninfo_unexecuted_blocks=1 00:42:46.968 00:42:46.968 ' 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:46.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.968 --rc genhtml_branch_coverage=1 00:42:46.968 --rc genhtml_function_coverage=1 00:42:46.968 --rc genhtml_legend=1 00:42:46.968 --rc geninfo_all_blocks=1 00:42:46.968 --rc geninfo_unexecuted_blocks=1 00:42:46.968 00:42:46.968 ' 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:46.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.968 --rc genhtml_branch_coverage=1 00:42:46.968 --rc genhtml_function_coverage=1 00:42:46.968 --rc genhtml_legend=1 00:42:46.968 --rc geninfo_all_blocks=1 00:42:46.968 --rc geninfo_unexecuted_blocks=1 00:42:46.968 00:42:46.968 ' 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:46.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.968 --rc genhtml_branch_coverage=1 00:42:46.968 --rc genhtml_function_coverage=1 00:42:46.968 --rc genhtml_legend=1 00:42:46.968 --rc geninfo_all_blocks=1 00:42:46.968 --rc geninfo_unexecuted_blocks=1 00:42:46.968 00:42:46.968 ' 00:42:46.968 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.969 
09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:46.969 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:46.969 09:54:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:46.969 Cannot find device "nvmf_init_br" 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:46.969 Cannot find device "nvmf_init_br2" 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:46.969 Cannot find device "nvmf_tgt_br" 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:46.969 Cannot find device "nvmf_tgt_br2" 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:46.969 Cannot find device "nvmf_init_br" 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:46.969 Cannot find device "nvmf_init_br2" 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:46.969 Cannot find device "nvmf_tgt_br" 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:42:46.969 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:46.970 Cannot find device "nvmf_tgt_br2" 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:46.970 Cannot find device "nvmf_br" 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:46.970 Cannot find device "nvmf_init_if" 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:46.970 Cannot find device "nvmf_init_if2" 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:46.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:46.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:46.970 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
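The block of ip/iptables commands around this point is nvmf_veth_init rebuilding the same virtual topology the previous test used: two initiator interfaces on the host (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) and two target interfaces inside the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), with all four veth peers enslaved to the nvmf_br bridge and iptables ACCEPT rules opened for TCP port 4420. One leg of that setup, condensed from the trace:

    # one of the four veth legs (the 10.0.0.3 target path of this run)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host -> namespace reachability check

The four ping checks that follow confirm reachability in both directions (host to namespace and namespace to host) before the target application is started.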
00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:47.229 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:47.230 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:47.230 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:42:47.230 00:42:47.230 --- 10.0.0.3 ping statistics --- 00:42:47.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:47.230 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:47.230 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:47.230 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:42:47.230 00:42:47.230 --- 10.0.0.4 ping statistics --- 00:42:47.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:47.230 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:47.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:47.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:42:47.230 00:42:47.230 --- 10.0.0.1 ping statistics --- 00:42:47.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:47.230 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:47.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:47.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:42:47.230 00:42:47.230 --- 10.0.0.2 ping statistics --- 00:42:47.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:47.230 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64684 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64684 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64684 ']' 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:47.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
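nvmfappstart here launches the target application inside the namespace that was just wired up and blocks until its RPC socket answers. Roughly, and glossing over the retry bookkeeping that waitforlisten adds (the polling loop below is an approximation of that helper, not its literal code):

    # start nvmf_tgt on cores 0-3 (-m 0xF) inside the target namespace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # don't issue configuration RPCs until /var/tmp/spdk.sock is listening
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done

Compare the queue-depth run above, which used -m 0x2 and therefore a single reactor; this test asks for four cores, which is why four "Reactor started" notices appear once the app is up.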
00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:47.230 09:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:47.488 [2024-12-09 09:54:54.768384] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:42:47.489 [2024-12-09 09:54:54.768466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:47.489 [2024-12-09 09:54:54.928983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:47.489 [2024-12-09 09:54:54.968885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:47.489 [2024-12-09 09:54:54.968970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:47.489 [2024-12-09 09:54:54.968988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:47.489 [2024-12-09 09:54:54.969000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:47.489 [2024-12-09 09:54:54.969009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:47.489 [2024-12-09 09:54:54.969888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:47.489 [2024-12-09 09:54:54.969939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:47.489 [2024-12-09 09:54:54.970010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:47.489 [2024-12-09 09:54:54.970013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.747 [2024-12-09 09:54:55.003417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:47.747 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:47.747 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:42:47.747 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:47.747 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:47.747 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:47.747 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:47.747 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:48.004 [2024-12-09 09:54:55.450908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:48.005 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b 
Malloc0 00:42:48.571 Malloc0 00:42:48.571 09:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:42:48.571 09:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:49.137 09:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:42:49.396 [2024-12-09 09:54:56.643992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:49.396 09:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:42:49.655 [2024-12-09 09:54:56.940256] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:42:49.655 09:54:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid=eacce601-ab17-4447-87df-663b0f4f5455 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:42:49.655 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid=eacce601-ab17-4447-87df-663b0f4f5455 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:42:49.913 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:42:49.913 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:42:49.913 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:49.913 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:49.913 09:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 
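At this point the target side of the multipath test is in place: one malloc namespace exported through nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled (the -r flag on nvmf_create_subsystem) and listeners on both 10.0.0.3:4420 and 10.0.0.4:4420, and the host has connected to each portal with the kernel initiator. In outline, using the hostnqn/hostid generated for this run:

    # connect both portals of the ANA-enabled subsystem
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G

    # each path appears as its own controller (nvme0c0n1, nvme0c1n1); ANA state is read from sysfs
    cat /sys/block/nvme0c0n1/ana_state    # optimized right after connect
    cat /sys/block/nvme0c1n1/ana_state

    # the test then flips path states from the target while fio runs, e.g.
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n inaccessible

check_ana_state in the trace below simply re-reads those sysfs files, with a 20-iteration timeout, until they report the expected value.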
00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64772 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:42:51.813 09:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:42:51.813 [global] 00:42:51.813 thread=1 00:42:51.813 invalidate=1 00:42:51.813 rw=randrw 00:42:51.813 time_based=1 00:42:51.813 runtime=6 00:42:51.813 ioengine=libaio 00:42:51.813 direct=1 00:42:51.813 bs=4096 00:42:51.813 iodepth=128 00:42:51.813 norandommap=0 00:42:51.813 numjobs=1 00:42:51.813 00:42:51.813 verify_dump=1 00:42:51.813 verify_backlog=512 00:42:51.813 verify_state_save=0 00:42:51.813 do_verify=1 00:42:51.813 verify=crc32c-intel 00:42:51.813 [job0] 00:42:51.813 filename=/dev/nvme0n1 00:42:51.813 Could not set queue depth (nvme0n1) 00:42:52.072 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:52.072 fio-3.35 00:42:52.072 Starting 1 thread 00:42:53.011 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:42:53.273 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:42:53.531 09:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:42:53.790 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:42:54.048 09:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64772 00:42:58.233 00:42:58.233 job0: (groupid=0, jobs=1): err= 0: pid=64793: Mon Dec 9 09:55:05 2024 00:42:58.233 read: IOPS=9982, BW=39.0MiB/s (40.9MB/s)(234MiB/6001msec) 00:42:58.233 slat (usec): min=2, max=6000, avg=58.24, stdev=232.54 00:42:58.233 clat (usec): min=1415, max=17610, avg=8732.89, stdev=1576.43 00:42:58.233 lat (usec): min=1437, max=17617, avg=8791.13, stdev=1581.16 00:42:58.233 clat percentiles (usec): 00:42:58.233 | 1.00th=[ 4490], 5.00th=[ 6652], 10.00th=[ 7439], 20.00th=[ 7898], 00:42:58.233 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8717], 00:42:58.233 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[10290], 95.00th=[12387], 00:42:58.233 | 99.00th=[13698], 99.50th=[14091], 99.90th=[15008], 99.95th=[15533], 00:42:58.233 | 99.99th=[17171] 00:42:58.233 bw ( KiB/s): min= 6752, max=26080, per=51.15%, avg=20424.73, stdev=6614.95, samples=11 00:42:58.233 iops : min= 1688, max= 6520, avg=5106.18, stdev=1653.74, samples=11 00:42:58.233 write: IOPS=5843, BW=22.8MiB/s (23.9MB/s)(121MiB/5306msec); 0 zone resets 00:42:58.233 slat (usec): min=4, max=7280, avg=69.43, stdev=175.69 00:42:58.233 clat (usec): min=900, max=16042, avg=7663.86, stdev=1462.99 00:42:58.233 lat (usec): min=924, max=16892, avg=7733.29, stdev=1468.39 00:42:58.233 clat percentiles (usec): 00:42:58.233 | 1.00th=[ 3458], 5.00th=[ 4490], 10.00th=[ 5932], 20.00th=[ 7046], 00:42:58.233 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:42:58.233 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 9503], 00:42:58.233 | 99.00th=[12125], 99.50th=[12911], 99.90th=[14746], 99.95th=[15008], 00:42:58.233 | 99.99th=[15926] 00:42:58.233 bw ( KiB/s): min= 6808, max=25728, per=87.73%, avg=20504.00, stdev=6546.34, samples=11 00:42:58.233 iops : min= 1702, max= 6432, avg=5126.00, stdev=1636.58, samples=11 00:42:58.233 lat (usec) : 1000=0.01% 00:42:58.233 lat (msec) : 2=0.03%, 4=1.26%, 10=89.92%, 20=8.79% 00:42:58.234 cpu : usr=5.63%, sys=22.88%, ctx=5248, majf=0, minf=90 00:42:58.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:42:58.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:58.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:58.234 issued rwts: total=59904,31003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:58.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:58.234 00:42:58.234 Run status group 0 (all jobs): 00:42:58.234 READ: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=234MiB (245MB), run=6001-6001msec 00:42:58.234 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=121MiB (127MB), run=5306-5306msec 00:42:58.234 00:42:58.234 Disk stats (read/write): 00:42:58.234 nvme0n1: ios=59092/30482, merge=0/0, ticks=492683/217458, in_queue=710141, util=98.63% 00:42:58.234 09:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:42:58.492 09:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64880 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:42:59.058 09:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:42:59.058 [global] 00:42:59.058 thread=1 00:42:59.058 invalidate=1 00:42:59.058 rw=randrw 00:42:59.058 time_based=1 00:42:59.058 runtime=6 00:42:59.058 ioengine=libaio 00:42:59.058 direct=1 00:42:59.058 bs=4096 00:42:59.058 iodepth=128 00:42:59.058 norandommap=0 00:42:59.058 numjobs=1 00:42:59.058 00:42:59.058 verify_dump=1 00:42:59.058 verify_backlog=512 00:42:59.058 verify_state_save=0 00:42:59.058 do_verify=1 00:42:59.058 verify=crc32c-intel 00:42:59.059 [job0] 00:42:59.059 filename=/dev/nvme0n1 00:42:59.059 Could not set queue depth (nvme0n1) 00:42:59.059 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:59.059 fio-3.35 00:42:59.059 Starting 1 thread 00:42:59.992 09:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:43:00.559 09:55:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:43:00.817 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:43:00.817 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:43:00.817 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:43:00.817 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:43:00.817 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:43:00.817 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:43:00.817 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:43:00.817 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:43:00.818 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:43:00.818 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:43:00.818 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:43:00.818 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:43:00.818 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:43:01.075 09:55:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:43:01.650 09:55:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64880 00:43:05.878 00:43:05.878 job0: (groupid=0, jobs=1): err= 0: pid=64901: Mon Dec 9 09:55:12 2024 00:43:05.878 read: IOPS=11.2k, BW=43.8MiB/s (46.0MB/s)(263MiB/6004msec) 00:43:05.878 slat (usec): min=3, max=10625, avg=43.21, stdev=189.98 00:43:05.878 clat (usec): min=290, max=19704, avg=7835.31, stdev=2588.81 00:43:05.878 lat (usec): min=301, max=19713, avg=7878.52, stdev=2601.73 00:43:05.878 clat percentiles (usec): 00:43:05.878 | 1.00th=[ 1188], 5.00th=[ 2638], 10.00th=[ 4146], 20.00th=[ 5866], 00:43:05.878 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8455], 00:43:05.878 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10552], 95.00th=[12125], 00:43:05.878 | 99.00th=[13960], 99.50th=[15533], 99.90th=[17695], 99.95th=[18220], 00:43:05.878 | 99.99th=[19006] 00:43:05.878 bw ( KiB/s): min= 3792, max=40192, per=52.99%, avg=23787.82, stdev=9795.40, samples=11 00:43:05.878 iops : min= 948, max=10048, avg=5946.91, stdev=2448.85, samples=11 00:43:05.878 write: IOPS=6759, BW=26.4MiB/s (27.7MB/s)(140MiB/5306msec); 0 zone resets 00:43:05.878 slat (usec): min=12, max=2232, avg=53.83, stdev=120.17 00:43:05.878 clat (usec): min=244, max=18595, avg=6522.41, stdev=2352.46 00:43:05.878 lat (usec): min=265, max=18638, avg=6576.24, stdev=2365.90 00:43:05.878 clat percentiles (usec): 00:43:05.878 | 1.00th=[ 1123], 5.00th=[ 2311], 10.00th=[ 3359], 20.00th=[ 4293], 00:43:05.878 | 30.00th=[ 5014], 40.00th=[ 6390], 50.00th=[ 7177], 60.00th=[ 7570], 00:43:05.879 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 8979], 95.00th=[10028], 00:43:05.879 | 99.00th=[11731], 99.50th=[12649], 99.90th=[15795], 99.95th=[16319], 00:43:05.879 | 99.99th=[18482] 00:43:05.879 bw ( KiB/s): min= 4056, max=39528, per=88.06%, avg=23809.55, stdev=9657.11, samples=11 00:43:05.879 iops : min= 1014, max= 9882, avg=5952.36, stdev=2414.28, samples=11 00:43:05.879 lat (usec) : 250=0.01%, 500=0.08%, 750=0.19%, 1000=0.37% 00:43:05.879 lat (msec) : 2=2.80%, 4=8.34%, 10=78.12%, 20=10.10% 00:43:05.879 cpu : usr=6.38%, sys=26.19%, ctx=6606, majf=0, minf=108 00:43:05.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:43:05.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:05.879 issued rwts: total=67376,35865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.879 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:43:05.879 00:43:05.879 Run status group 0 (all jobs): 00:43:05.879 READ: bw=43.8MiB/s (46.0MB/s), 43.8MiB/s-43.8MiB/s (46.0MB/s-46.0MB/s), io=263MiB (276MB), run=6004-6004msec 00:43:05.879 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=140MiB (147MB), run=5306-5306msec 00:43:05.879 00:43:05.879 Disk stats (read/write): 00:43:05.879 nvme0n1: ios=66718/35005, merge=0/0, ticks=497358/210091, in_queue=707449, util=98.60% 00:43:05.879 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:05.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:43:05.879 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:05.879 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:43:05.879 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:05.879 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:05.879 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:05.880 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:05.880 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:43:05.880 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:05.880 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:43:05.880 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:43:05.880 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:43:05.880 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:43:05.880 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:05.880 09:55:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:05.880 rmmod nvme_tcp 00:43:05.880 rmmod nvme_fabrics 00:43:05.880 rmmod nvme_keyring 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 64684 ']' 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64684 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64684 ']' 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64684 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64684 00:43:05.880 killing process with pid 64684 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64684' 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64684 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64684 00:43:05.880 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:05.881 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:06.140 
09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:43:06.140 00:43:06.140 real 0m19.518s 00:43:06.140 user 1m13.050s 00:43:06.140 sys 0m10.016s 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:06.140 ************************************ 00:43:06.140 END TEST nvmf_target_multipath 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:06.140 ************************************ 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:43:06.140 ************************************ 00:43:06.140 START TEST nvmf_zcopy 00:43:06.140 ************************************ 00:43:06.140 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:43:06.399 * Looking for test storage... 
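
Note: the nvmftestfini teardown traced a few lines above, at the end of the multipath test, undoes the earlier setup. Condensed, it is roughly the sequence below; 64684 is this run's nvmf_tgt pid (killprocess is shown here simply as kill), and remove_spdk_ns runs with xtrace disabled, so its commands are not visible in this log.

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring unloading
    kill 64684                                               # the nvmf_tgt started for this test
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK-tagged ACCEPT rules
    ip link delete nvmf_br type bridge                       # tear down the bridge and veth pairs
    ip link delete nvmf_init_if && ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
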
00:43:06.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:06.399 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:06.399 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:06.399 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:06.400 --rc genhtml_branch_coverage=1 00:43:06.400 --rc genhtml_function_coverage=1 00:43:06.400 --rc genhtml_legend=1 00:43:06.400 --rc geninfo_all_blocks=1 00:43:06.400 --rc geninfo_unexecuted_blocks=1 00:43:06.400 00:43:06.400 ' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:06.400 --rc genhtml_branch_coverage=1 00:43:06.400 --rc genhtml_function_coverage=1 00:43:06.400 --rc genhtml_legend=1 00:43:06.400 --rc geninfo_all_blocks=1 00:43:06.400 --rc geninfo_unexecuted_blocks=1 00:43:06.400 00:43:06.400 ' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:06.400 --rc genhtml_branch_coverage=1 00:43:06.400 --rc genhtml_function_coverage=1 00:43:06.400 --rc genhtml_legend=1 00:43:06.400 --rc geninfo_all_blocks=1 00:43:06.400 --rc geninfo_unexecuted_blocks=1 00:43:06.400 00:43:06.400 ' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:06.400 --rc genhtml_branch_coverage=1 00:43:06.400 --rc genhtml_function_coverage=1 00:43:06.400 --rc genhtml_legend=1 00:43:06.400 --rc geninfo_all_blocks=1 00:43:06.400 --rc geninfo_unexecuted_blocks=1 00:43:06.400 00:43:06.400 ' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
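
Note: the lt/cmp_versions trace above is the harness deciding which lcov option set to export (lcov 1.15 is less than 2, so the 1.x-style --rc flags are used). A simplified sketch of the comparison it performs, not the verbatim scripts/common.sh code:

    lt() {                              # returns 0 (true) when version $1 < version $2
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                        # equal versions are not "less than"
    }
    lt 1.15 2 && echo "use the lcov 1.x option set"
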
00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:06.400 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
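
Note on the "[: : integer expression expected" message above: build_nvmf_app_args reaches a numeric test at common.sh line 33 with a variable that is empty in this environment, so the comparison itself errors out (harmlessly, since the script only branches on its result). The failing pattern and a defensive variant are sketched below; SOME_FLAG is a stand-in name, as the actual variable is not named in this trace.

    SOME_FLAG=''
    [ "$SOME_FLAG" -eq 1 ]        # -> "[: : integer expression expected", test exits non-zero
    [ "${SOME_FLAG:-0}" -eq 1 ]   # defaults the empty value to 0; no error, condition is simply false
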
00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:06.400 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:06.401 Cannot find device "nvmf_init_br" 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:43:06.401 09:55:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:06.401 Cannot find device "nvmf_init_br2" 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:06.401 Cannot find device "nvmf_tgt_br" 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:43:06.401 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:06.659 Cannot find device "nvmf_tgt_br2" 00:43:06.659 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:43:06.659 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:06.659 Cannot find device "nvmf_init_br" 00:43:06.659 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:43:06.659 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:06.659 Cannot find device "nvmf_init_br2" 00:43:06.659 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:43:06.659 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:06.659 Cannot find device "nvmf_tgt_br" 00:43:06.659 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:43:06.659 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:06.660 Cannot find device "nvmf_tgt_br2" 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:06.660 Cannot find device "nvmf_br" 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:06.660 Cannot find device "nvmf_init_if" 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:06.660 Cannot find device "nvmf_init_if2" 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:06.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:06.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:06.660 09:55:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:06.660 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:06.919 09:55:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:06.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:06.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:43:06.919 00:43:06.919 --- 10.0.0.3 ping statistics --- 00:43:06.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:06.919 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:06.919 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:06.919 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:43:06.919 00:43:06.919 --- 10.0.0.4 ping statistics --- 00:43:06.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:06.919 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:06.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:06.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:43:06.919 00:43:06.919 --- 10.0.0.1 ping statistics --- 00:43:06.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:06.919 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:06.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:06.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:43:06.919 00:43:06.919 --- 10.0.0.2 ping statistics --- 00:43:06.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:06.919 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65207 00:43:06.919 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65207 00:43:06.920 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:43:06.920 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65207 ']' 00:43:06.920 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:06.920 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:06.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:06.920 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:06.920 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:06.920 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:06.920 [2024-12-09 09:55:14.341918] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
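
Note: the nvmf_veth_init sequence above, run just before the target application is launched, builds the whole test network out of veth pairs attached to one bridge, with the target-side interfaces moved into a namespace so target and initiator get separate stacks on the same host. Condensed, with device and namespace names as in this trace (the ip link set ... up calls and the second iptables rule are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator side, 10.0.0.2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side, 10.0.0.3
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target side, 10.0.0.4
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br        # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF for later cleanup
    ping -c 1 10.0.0.3                             # the pings above verify both directions across the bridge
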
00:43:06.920 [2024-12-09 09:55:14.342030] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:07.178 [2024-12-09 09:55:14.504372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.178 [2024-12-09 09:55:14.547045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:07.178 [2024-12-09 09:55:14.547135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:07.178 [2024-12-09 09:55:14.547157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:07.178 [2024-12-09 09:55:14.547172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:07.179 [2024-12-09 09:55:14.547186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:07.179 [2024-12-09 09:55:14.547666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:07.179 [2024-12-09 09:55:14.581907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:07.179 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:07.179 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:43:07.179 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:07.179 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:07.179 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:07.179 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:07.179 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:43:07.179 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:43:07.179 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.179 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:07.437 [2024-12-09 09:55:14.681122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:43:07.437 [2024-12-09 09:55:14.697283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:07.437 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:07.438 malloc0 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:07.438 { 00:43:07.438 "params": { 00:43:07.438 "name": "Nvme$subsystem", 00:43:07.438 "trtype": "$TEST_TRANSPORT", 00:43:07.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:07.438 "adrfam": "ipv4", 00:43:07.438 "trsvcid": "$NVMF_PORT", 00:43:07.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:07.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:07.438 "hdgst": ${hdgst:-false}, 00:43:07.438 "ddgst": ${ddgst:-false} 00:43:07.438 }, 00:43:07.438 "method": "bdev_nvme_attach_controller" 00:43:07.438 } 00:43:07.438 EOF 00:43:07.438 )") 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
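The rpc_cmd calls above are what actually configure the target; a standalone sketch of the same sequence, assuming rpc_cmd is a thin wrapper that forwards its arguments to scripts/rpc.py against /var/tmp/spdk.sock (all flags copied verbatim from the run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                        # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420    # discovery subsystem on the same address
    $rpc bdev_malloc_create 32 4096 -b malloc0                               # 32 MB malloc bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1       # expose malloc0 as namespace 1

The JSON that gen_nvmf_target_json prints next is the host-side counterpart: it becomes the --json config passed to bdevperf and simply attaches an NVMe-oF controller (bdev_nvme_attach_controller) to the listener created above.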
00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:43:07.438 09:55:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:07.438 "params": { 00:43:07.438 "name": "Nvme1", 00:43:07.438 "trtype": "tcp", 00:43:07.438 "traddr": "10.0.0.3", 00:43:07.438 "adrfam": "ipv4", 00:43:07.438 "trsvcid": "4420", 00:43:07.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:07.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:07.438 "hdgst": false, 00:43:07.438 "ddgst": false 00:43:07.438 }, 00:43:07.438 "method": "bdev_nvme_attach_controller" 00:43:07.438 }' 00:43:07.438 [2024-12-09 09:55:14.797842] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:43:07.438 [2024-12-09 09:55:14.797985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65227 ] 00:43:07.697 [2024-12-09 09:55:14.956045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.697 [2024-12-09 09:55:14.995131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:07.697 [2024-12-09 09:55:15.037510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:07.697 Running I/O for 10 seconds... 00:43:10.009 5643.00 IOPS, 44.09 MiB/s [2024-12-09T09:55:18.472Z] 5504.00 IOPS, 43.00 MiB/s [2024-12-09T09:55:19.409Z] 5566.67 IOPS, 43.49 MiB/s [2024-12-09T09:55:20.346Z] 5595.00 IOPS, 43.71 MiB/s [2024-12-09T09:55:21.282Z] 5597.40 IOPS, 43.73 MiB/s [2024-12-09T09:55:22.219Z] 5620.17 IOPS, 43.91 MiB/s [2024-12-09T09:55:23.200Z] 5615.29 IOPS, 43.87 MiB/s [2024-12-09T09:55:24.575Z] 5631.75 IOPS, 44.00 MiB/s [2024-12-09T09:55:25.511Z] 5643.89 IOPS, 44.09 MiB/s [2024-12-09T09:55:25.511Z] 5649.30 IOPS, 44.14 MiB/s 00:43:18.009 Latency(us) 00:43:18.009 [2024-12-09T09:55:25.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:18.009 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:43:18.009 Verification LBA range: start 0x0 length 0x1000 00:43:18.009 Nvme1n1 : 10.01 5652.66 44.16 0.00 0.00 22573.39 1377.75 31695.59 00:43:18.009 [2024-12-09T09:55:25.511Z] =================================================================================================================== 00:43:18.009 [2024-12-09T09:55:25.511Z] Total : 5652.66 44.16 0.00 0.00 22573.39 1377.75 31695.59 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65350 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:18.009 { 00:43:18.009 "params": { 00:43:18.009 "name": "Nvme$subsystem", 00:43:18.009 "trtype": "$TEST_TRANSPORT", 00:43:18.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:18.009 "adrfam": "ipv4", 00:43:18.009 "trsvcid": "$NVMF_PORT", 00:43:18.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:18.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:18.009 "hdgst": ${hdgst:-false}, 00:43:18.009 "ddgst": ${ddgst:-false} 00:43:18.009 }, 00:43:18.009 "method": "bdev_nvme_attach_controller" 00:43:18.009 } 00:43:18.009 EOF 00:43:18.009 )") 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:43:18.009 [2024-12-09 09:55:25.361036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.361082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:43:18.009 09:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:18.009 "params": { 00:43:18.009 "name": "Nvme1", 00:43:18.009 "trtype": "tcp", 00:43:18.009 "traddr": "10.0.0.3", 00:43:18.009 "adrfam": "ipv4", 00:43:18.009 "trsvcid": "4420", 00:43:18.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:18.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:18.009 "hdgst": false, 00:43:18.009 "ddgst": false 00:43:18.009 }, 00:43:18.009 "method": "bdev_nvme_attach_controller" 00:43:18.009 }' 00:43:18.009 [2024-12-09 09:55:25.373021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.373058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.381024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.381065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.393042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.393089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.405043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.405103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.417047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.417091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.420611] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:43:18.009 [2024-12-09 09:55:25.420729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65350 ] 00:43:18.009 [2024-12-09 09:55:25.429039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.429083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.441010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.441042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.453014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.453045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.461009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.461038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.473021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.473053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.485021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.485055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.497033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.497066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.009 [2024-12-09 09:55:25.509031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.009 [2024-12-09 09:55:25.509064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.267 [2024-12-09 09:55:25.521028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.267 [2024-12-09 09:55:25.521057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.267 [2024-12-09 09:55:25.533033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.267 [2024-12-09 09:55:25.533062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.267 [2024-12-09 09:55:25.545034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.267 [2024-12-09 09:55:25.545063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.267 [2024-12-09 09:55:25.557063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.267 [2024-12-09 09:55:25.557102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.267 [2024-12-09 09:55:25.569054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.267 [2024-12-09 09:55:25.569088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.267 [2024-12-09 09:55:25.574877] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:43:18.267 [2024-12-09 09:55:25.577050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.267 [2024-12-09 09:55:25.577080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.589088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.589133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.601096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.601138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.608921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:18.268 [2024-12-09 09:55:25.613063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.613093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.621070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.621102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.629088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.629125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.641096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.641140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.649375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:18.268 [2024-12-09 09:55:25.653093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.653128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.665097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.665135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.677108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.677149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.689105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.689139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.701116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.701149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.713130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.713164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.721131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:43:18.268 [2024-12-09 09:55:25.721165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.733139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.733169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 [2024-12-09 09:55:25.745165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.745202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.268 Running I/O for 5 seconds... 00:43:18.268 [2024-12-09 09:55:25.757181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.268 [2024-12-09 09:55:25.757217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.774156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.774193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.790455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.790493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.808598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.808637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.824011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.824051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.840594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.840637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.851077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.851130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.865926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.865984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.880709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.880753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.897454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.897494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.912225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.912275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.928918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.928971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:43:18.526 [2024-12-09 09:55:25.945617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.945655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.955379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.955415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.970481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.970518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:25.987809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:25.987845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:26.004459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:26.004496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.526 [2024-12-09 09:55:26.020058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.526 [2024-12-09 09:55:26.020100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.036612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.036654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.052809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.052863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.070362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.070415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.080584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.080628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.092909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.092978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.104610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.104654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.116230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.116271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.133032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.133086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.143479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 
[2024-12-09 09:55:26.143519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.156150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.156188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.172082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.172123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.188762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.188807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.205361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.205410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.223403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.223631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.237861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.237909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.253741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.253788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.263782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.263826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:18.784 [2024-12-09 09:55:26.276888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:18.784 [2024-12-09 09:55:26.276930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.292236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.292274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.308747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.308786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.325281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.325318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.341686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.341724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.357844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.357885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.374568] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.374620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.390987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.391043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.407706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.407764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.424291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.424345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.441695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.441924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.456574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.456781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.474332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.474555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.489452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.489678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.505360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.505598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.523074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.523303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.043 [2024-12-09 09:55:26.538541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.043 [2024-12-09 09:55:26.538790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.555015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.555248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.572500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.572741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.588671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.588912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.605463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.605713] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.622016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.622267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.637815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.638066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.647678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.647903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.662932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.663185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.680240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.680456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.697107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.697157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.713329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.713380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.730698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.730750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.746344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.746395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 11168.00 IOPS, 87.25 MiB/s [2024-12-09T09:55:26.804Z] [2024-12-09 09:55:26.755698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.755871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.772612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.772861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.302 [2024-12-09 09:55:26.789417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.302 [2024-12-09 09:55:26.789634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.806332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.806583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.824247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.824471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 
09:55:26.839279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.839523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.857359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.857620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.872680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.872901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.882560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.882785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.898478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.898707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.915050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.915270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.933406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.933632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.948924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.949194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.959221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.959470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.974345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.974568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:26.985389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:26.985436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:27.000101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:27.000325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:27.018335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:27.018390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:27.033199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:27.033260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:27.048519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:27.048575] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.560 [2024-12-09 09:55:27.057742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.560 [2024-12-09 09:55:27.057787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.819 [2024-12-09 09:55:27.073978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.819 [2024-12-09 09:55:27.074033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.819 [2024-12-09 09:55:27.083995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.819 [2024-12-09 09:55:27.084043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.819 [2024-12-09 09:55:27.099346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.819 [2024-12-09 09:55:27.099550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.819 [2024-12-09 09:55:27.115561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.819 [2024-12-09 09:55:27.115618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.819 [2024-12-09 09:55:27.133843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.819 [2024-12-09 09:55:27.133901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.819 [2024-12-09 09:55:27.149026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.819 [2024-12-09 09:55:27.149086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.819 [2024-12-09 09:55:27.167327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.819 [2024-12-09 09:55:27.167388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.819 [2024-12-09 09:55:27.182444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.820 [2024-12-09 09:55:27.182676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.820 [2024-12-09 09:55:27.192323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.820 [2024-12-09 09:55:27.192371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.820 [2024-12-09 09:55:27.208261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.820 [2024-12-09 09:55:27.208315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.820 [2024-12-09 09:55:27.224613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.820 [2024-12-09 09:55:27.224667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.820 [2024-12-09 09:55:27.241396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.820 [2024-12-09 09:55:27.241658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.820 [2024-12-09 09:55:27.257366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.820 [2024-12-09 09:55:27.257421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.820 [2024-12-09 09:55:27.274692] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.820 [2024-12-09 09:55:27.274752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.820 [2024-12-09 09:55:27.290598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.820 [2024-12-09 09:55:27.290656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:19.820 [2024-12-09 09:55:27.306944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:19.820 [2024-12-09 09:55:27.307013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.078 [2024-12-09 09:55:27.323919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.078 [2024-12-09 09:55:27.323989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.078 [2024-12-09 09:55:27.340884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.078 [2024-12-09 09:55:27.340942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.078 [2024-12-09 09:55:27.356723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.078 [2024-12-09 09:55:27.356774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.078 [2024-12-09 09:55:27.373872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.078 [2024-12-09 09:55:27.373934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.078 [2024-12-09 09:55:27.389361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.078 [2024-12-09 09:55:27.389407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.078 [2024-12-09 09:55:27.406184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.078 [2024-12-09 09:55:27.406245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.078 [2024-12-09 09:55:27.423194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.078 [2024-12-09 09:55:27.423424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.078 [2024-12-09 09:55:27.433747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.078 [2024-12-09 09:55:27.433790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.079 [2024-12-09 09:55:27.445463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.079 [2024-12-09 09:55:27.445508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.079 [2024-12-09 09:55:27.456437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.079 [2024-12-09 09:55:27.456479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.079 [2024-12-09 09:55:27.469706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.079 [2024-12-09 09:55:27.469752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.079 [2024-12-09 09:55:27.479993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.079 [2024-12-09 09:55:27.480034] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.079 [2024-12-09 09:55:27.491164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.079 [2024-12-09 09:55:27.491208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.079 [2024-12-09 09:55:27.506408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.079 [2024-12-09 09:55:27.506623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.079 [2024-12-09 09:55:27.522236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.079 [2024-12-09 09:55:27.522283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.079 [2024-12-09 09:55:27.531732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.079 [2024-12-09 09:55:27.531775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.079 [2024-12-09 09:55:27.547214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.079 [2024-12-09 09:55:27.547260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.079 [2024-12-09 09:55:27.563321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.079 [2024-12-09 09:55:27.563368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.583319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.583419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.602186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.602244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.616843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.616901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.637540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.637841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.659647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.659731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.674999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.675058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.691852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.691939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.707766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.707819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.726195] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.726252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.741003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.741061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 10906.00 IOPS, 85.20 MiB/s [2024-12-09T09:55:27.840Z] [2024-12-09 09:55:27.755663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.755713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.769758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.769812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.783766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.783818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.797542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.797590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.814108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.814156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.338 [2024-12-09 09:55:27.830890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.338 [2024-12-09 09:55:27.830939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:27.848408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:27.848596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:27.865181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:27.865227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:27.880545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:27.880599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:27.895382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:27.895592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:27.911295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:27.911344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:27.927500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:27.927547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:27.943661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:43:20.597 [2024-12-09 09:55:27.943707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:27.954970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:27.955013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:27.969726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:27.969767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:27.985158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:27.985206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:28.001352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:28.001400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:28.011736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:28.011776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:28.023918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:28.023975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:28.039010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:28.039050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:28.055599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:28.055644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:28.065805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:28.065846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:28.080828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:28.080871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.597 [2024-12-09 09:55:28.091805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.597 [2024-12-09 09:55:28.091848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.106878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.107087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.122951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.123008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.132672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.132711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.144870] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.144910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.160060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.160099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.169881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.169919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.181553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.181591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.192545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.192584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.205057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.205100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.221024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.221064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.231354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.231392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.246029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.246067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.262445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.262484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.272213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.272250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.287275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.287313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.304263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.304302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.320223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.320289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.336622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.336686] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:20.856 [2024-12-09 09:55:28.354842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:20.856 [2024-12-09 09:55:28.354894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.114 [2024-12-09 09:55:28.369972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.370037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.379536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.379577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.391744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.391808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.408112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.408155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.425150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.425384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.441598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.441665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.451469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.451514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.467158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.467228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.477654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.477703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.489522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.489593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.504924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.505160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.521037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.521105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.530982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.531020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.546873] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.546937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.563211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.563277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.580732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.580951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.591093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.591156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.115 [2024-12-09 09:55:28.606113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.115 [2024-12-09 09:55:28.606163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.623251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.623315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.638481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.638551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.656509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.656558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.673006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.673065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.689355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.689421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.705012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.705072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.714035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.714073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.727969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.728010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.741681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.741723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 10867.67 IOPS, 84.90 MiB/s [2024-12-09T09:55:28.876Z] [2024-12-09 09:55:28.757460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
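Each pair of messages in this stretch is one rejected RPC round trip: spdk_nvmf_subsystem_add_ns_ext() in subsystem.c refuses the request because NSID 1 is already attached to the subsystem, and nvmf_rpc.c then surfaces that as "Unable to add namespace" to the caller. The pairs repeat with fresh timestamps while the I/O counters keep ticking, so they read as deliberately exercised error paths rather than failures of the run. A minimal sketch of a call that produces one such pair, assuming the rpc.py path below (taken from the repo layout visible elsewhere in this log) and a target that already exposes NSID 1 on cnode1, as this run does; malloc0 is the bdev name this test uses later in the trace:

  #!/usr/bin/env bash
  # Ask the running target to attach a bdev at an NSID that is already taken.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  if ! "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1; then
      # The target logs "Requested NSID 1 already in use" / "Unable to add namespace"
      # and the RPC exits non-zero, which is the outcome the repeated attempts above check for.
      echo "add_ns rejected as expected: NSID 1 is already occupied"
  fi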
00:43:21.374 [2024-12-09 09:55:28.757519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.776561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.776647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.791868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.791969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.808121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.808187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.825288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.825354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.842157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.842208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.374 [2024-12-09 09:55:28.858540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.374 [2024-12-09 09:55:28.858593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:28.877012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:28.877080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:28.891318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:28.891381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:28.906933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:28.907005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:28.917368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:28.917417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:28.933661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:28.933708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:28.949674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:28.949728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:28.966214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:28.966254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:28.983597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:28.983636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:28.999532] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:28.999589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:29.017948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:29.018002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:29.032883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:29.032924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:29.043086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:29.043123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:29.059821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:29.059875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:29.075243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:29.075293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:29.091418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:29.091458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:29.108600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:29.108640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.633 [2024-12-09 09:55:29.125363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.633 [2024-12-09 09:55:29.125411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.142348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.142403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.158051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.158088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.167510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.167548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.184622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.184682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.199898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.199939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.209744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.209786] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.226938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.227005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.242888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.242929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.252535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.252573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.269487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.269529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.286446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.286513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.300254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.300295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.316453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.316491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.334824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.334862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.350077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.350113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.367647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.367685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:21.892 [2024-12-09 09:55:29.382339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:21.892 [2024-12-09 09:55:29.382409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.398633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.398671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.415012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.415047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.431628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.431663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.447714] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.447751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.465119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.465156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.481596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.481634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.497349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.497387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.506999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.507035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.523254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.523292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.540510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.540548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.556946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.556996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.573196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.573232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.589997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.590036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.606712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.606750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.624826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.624863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.150 [2024-12-09 09:55:29.639474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.150 [2024-12-09 09:55:29.639510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.655559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.655599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.672822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.672860] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.689201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.689239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.706876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.706921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.721834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.721872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.737756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.737794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 10925.75 IOPS, 85.36 MiB/s [2024-12-09T09:55:29.911Z] [2024-12-09 09:55:29.755924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.755974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.771596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.771634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.789065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.789101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.805732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.805768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.821995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.822029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.839451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.839490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.855308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.855347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.871987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.872027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.889330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.889370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.409 [2024-12-09 09:55:29.904394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.409 [2024-12-09 09:55:29.904431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 
09:55:29.913308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:29.913345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:29.930065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:29.930103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:29.946743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:29.946783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:29.962416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:29.962457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:29.980068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:29.980119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:29.994395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:29.994434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.012142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.012195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.028449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.028500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.042363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.042423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.058693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.058733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.074660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.074698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.092476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.092514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.107192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.107230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.122515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.122553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.132803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.132840] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.148136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.148173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.668 [2024-12-09 09:55:30.164973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.668 [2024-12-09 09:55:30.165022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.181698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.181736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.197572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.197610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.207011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.207049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.223072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.223111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.239338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.239376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.257592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.257630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.272491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.272543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.282478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.282532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.298184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.298221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.314581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.314619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.330522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.330560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.348182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.348217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.363291] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.363329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.381060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.926 [2024-12-09 09:55:30.381100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.926 [2024-12-09 09:55:30.396308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.927 [2024-12-09 09:55:30.396346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.927 [2024-12-09 09:55:30.405718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.927 [2024-12-09 09:55:30.405756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:22.927 [2024-12-09 09:55:30.422652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:22.927 [2024-12-09 09:55:30.422690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.439387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.439424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.455769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.455807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.473088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.473127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.489103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.489140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.507190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.507227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.521628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.521668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.537475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.537513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.555697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.555736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.571525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.571563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.587790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.587827] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.605638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.605676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.620720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.620759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.638598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.638636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.653547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.653585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.185 [2024-12-09 09:55:30.669507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.185 [2024-12-09 09:55:30.669545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.687185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.687223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.703286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.703325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.713839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.713887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.728360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.728398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.744890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.744928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 11010.40 IOPS, 86.02 MiB/s [2024-12-09T09:55:30.946Z] [2024-12-09 09:55:30.760460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.760510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 00:43:23.444 Latency(us) 00:43:23.444 [2024-12-09T09:55:30.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:23.444 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:43:23.444 Nvme1n1 : 5.01 11009.59 86.01 0.00 0.00 11610.46 4468.36 30027.40 00:43:23.444 [2024-12-09T09:55:30.946Z] =================================================================================================================== 00:43:23.444 [2024-12-09T09:55:30.946Z] Total : 11009.59 86.01 0.00 0.00 11610.46 4468.36 30027.40 00:43:23.444 [2024-12-09 09:55:30.769548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 
09:55:30.769583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.781552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.781601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.793582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.793627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.805589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.805631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.817604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.817647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.829591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.829636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.841580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.841635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.853565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.853594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.865574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.444 [2024-12-09 09:55:30.865607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.444 [2024-12-09 09:55:30.877597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.445 [2024-12-09 09:55:30.877634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.445 [2024-12-09 09:55:30.889576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.445 [2024-12-09 09:55:30.889605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.445 [2024-12-09 09:55:30.901580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.445 [2024-12-09 09:55:30.901608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.445 [2024-12-09 09:55:30.913586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.445 [2024-12-09 09:55:30.913617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.445 [2024-12-09 09:55:30.925604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.445 [2024-12-09 09:55:30.925636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.445 [2024-12-09 09:55:30.937589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.445 [2024-12-09 09:55:30.937617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-12-09 09:55:30.949593] 
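The summary table a few lines above is internally consistent: at an I/O size of 8192 bytes, 11009.59 IOPS works out to 11009.59 x 8192 / 1048576 ≈ 86.01 MiB/s, matching the reported bandwidth, and with a queue depth of 128 the average latency should be roughly 128 / 11009.59 s ≈ 11.6 ms, in line with the reported 11610.46 us average. The interleaved "10867.67 IOPS, 84.90 MiB/s" style lines are the same job's periodic progress ticks.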
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-12-09 09:55:30.949622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65350) - No such process 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65350 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:23.703 delay0 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.703 09:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:43:23.703 [2024-12-09 09:55:31.180127] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:43:30.267 Initializing NVMe Controllers 00:43:30.267 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:43:30.267 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:43:30.267 Initialization complete. Launching workers. 
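At this point the I/O process (pid 65350) has already exited, so the kill at zcopy.sh line 42 finds nothing and the script simply waits on it. The trace then swaps the namespace out for a deliberately slow one and drives it with the abort example: NSID 1 is detached, malloc0 is wrapped in a delay bdev, the delay bdev is re-attached as NSID 1, and the abort tool runs against it over TCP (-t 5). The "Skipping unsupported current discovery service" line is a host-side discovery warning, not an error. A condensed restatement of those steps, with paths assumed from the CI layout shown in this log and rpc.py standing in for the script's rpc_cmd helper:

  #!/usr/bin/env bash
  # Re-plumb NSID 1 behind a delay bdev, then run the abort example against it.
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  "$RPC" nvmf_subsystem_remove_ns "$NQN" 1              # detach the current namespace
  "$RPC" bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000       # wrap malloc0 with large injected latencies
  "$RPC" nvmf_subsystem_add_ns "$NQN" delay0 -n 1       # expose the slow bdev as NSID 1

  # Random read/write load at queue depth 64; the injected latency keeps plenty of
  # commands in flight long enough to be aborted, which is what this phase measures.
  "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'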
00:43:30.267 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 68 00:43:30.267 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 355, failed to submit 33 00:43:30.267 success 198, unsuccessful 157, failed 0 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:30.267 rmmod nvme_tcp 00:43:30.267 rmmod nvme_fabrics 00:43:30.267 rmmod nvme_keyring 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65207 ']' 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65207 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65207 ']' 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65207 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65207 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:30.267 killing process with pid 65207 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65207' 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65207 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65207 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:30.267 09:55:37 
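With the abort run finished (320 I/Os completed, 355 aborts submitted, 198 of them successful), the test tears itself down: nvmftestfini syncs, unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules (the rmmod lines), and then stops the long-running target process 65207 through the killprocess helper, which first checks what the pid actually is so it never signals a sudo wrapper by mistake. A simplified sketch of that pattern as traced, assuming the pid is a child of the calling shell:

  #!/usr/bin/env bash
  # Simplified killprocess: verify the pid, refuse to signal "sudo", then kill and reap.
  killprocess() {
      local pid=$1 process_name=
      [[ -n $pid ]] || return 1                        # the '[' -z 65207 ']' guard in the trace
      kill -0 "$pid" 2>/dev/null || return 0           # nothing to do if it is already gone
      [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
      if [[ $process_name != sudo ]]; then             # here it resolves to reactor_1, the SPDK target
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid" 2>/dev/null || true                  # reap it; works because it is our child
  }

  killprocess 65207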
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:30.267 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:30.268 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:30.268 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:43:30.526 00:43:30.526 real 0m24.215s 00:43:30.526 user 0m39.577s 00:43:30.526 sys 0m6.813s 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.526 ************************************ 00:43:30.526 END TEST nvmf_zcopy 00:43:30.526 ************************************ 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:43:30.526 ************************************ 00:43:30.526 START TEST nvmf_nmic 00:43:30.526 ************************************ 00:43:30.526 09:55:37 
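The remaining cleanup traced here is network plumbing: iptr restores the firewall minus any SPDK_NVMF rules (iptables-save | grep -v SPDK_NVMF | iptables-restore), and nvmf_veth_fini dismantles the veth/bridge topology that nvmf/common.sh built for the 10.0.0.x test network, including the interfaces living in the nvmf_tgt_ns_spdk namespace. The suite then reports the nvmf_zcopy timing (24.2 s wall clock) and moves straight on to the nvmf_nmic test. A sketch of that teardown using the interface names from this run; the final namespace removal is an assumed equivalent of the remove_spdk_ns helper:

  #!/usr/bin/env bash
  # Tear down the SPDK nvmf test network: detach and drop the veth ends, remove
  # the bridge, and delete the target-side interfaces inside the test namespace.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster 2>/dev/null || true
      ip link set "$ifc" down 2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if  2>/dev/null || true
  ip link delete nvmf_init_if2 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true     # assumed equivalent of remove_spdk_ns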
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:43:30.526 * Looking for test storage... 00:43:30.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:43:30.526 09:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:30.786 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:30.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:30.786 --rc genhtml_branch_coverage=1 00:43:30.786 --rc genhtml_function_coverage=1 00:43:30.786 --rc genhtml_legend=1 00:43:30.786 --rc geninfo_all_blocks=1 00:43:30.786 --rc geninfo_unexecuted_blocks=1 00:43:30.786 00:43:30.786 ' 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:30.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:30.787 --rc genhtml_branch_coverage=1 00:43:30.787 --rc genhtml_function_coverage=1 00:43:30.787 --rc genhtml_legend=1 00:43:30.787 --rc geninfo_all_blocks=1 00:43:30.787 --rc geninfo_unexecuted_blocks=1 00:43:30.787 00:43:30.787 ' 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:30.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:30.787 --rc genhtml_branch_coverage=1 00:43:30.787 --rc genhtml_function_coverage=1 00:43:30.787 --rc genhtml_legend=1 00:43:30.787 --rc geninfo_all_blocks=1 00:43:30.787 --rc geninfo_unexecuted_blocks=1 00:43:30.787 00:43:30.787 ' 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:30.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:30.787 --rc genhtml_branch_coverage=1 00:43:30.787 --rc genhtml_function_coverage=1 00:43:30.787 --rc genhtml_legend=1 00:43:30.787 --rc geninfo_all_blocks=1 00:43:30.787 --rc geninfo_unexecuted_blocks=1 00:43:30.787 00:43:30.787 ' 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:30.787 09:55:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:30.787 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:43:30.787 09:55:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:30.787 Cannot 
find device "nvmf_init_br" 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:30.787 Cannot find device "nvmf_init_br2" 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:30.787 Cannot find device "nvmf_tgt_br" 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:30.787 Cannot find device "nvmf_tgt_br2" 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:30.787 Cannot find device "nvmf_init_br" 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:43:30.787 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:30.787 Cannot find device "nvmf_init_br2" 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:30.788 Cannot find device "nvmf_tgt_br" 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:30.788 Cannot find device "nvmf_tgt_br2" 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:30.788 Cannot find device "nvmf_br" 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:30.788 Cannot find device "nvmf_init_if" 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:30.788 Cannot find device "nvmf_init_if2" 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:30.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:30.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
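The "Cannot find device" / "Cannot open network namespace" messages above are expected: the script first clears any leftovers from a previous run (each command is followed by a tolerated failure) before nvmf_veth_init rebuilds the topology from scratch. The ip(8) calls that begin here and continue below create a self-contained two-namespace layout: the initiator-side veth ends (nvmf_init_if, nvmf_init_if2) stay in the root namespace, the target-side ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into nvmf_tgt_ns_spdk, and the bridge nvmf_br joins the peer halves so 10.0.0.1/10.0.0.2 can reach 10.0.0.3/10.0.0.4. A trimmed-down sketch of the same layout, covering only the first initiator/target pair and assuming root privileges (the full script in test/nvmf/common.sh also sets up the *_if2 pair and the iptables rules):

  # one network namespace plays the NVMe-oF target, the root namespace the initiator
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                             # bridge ties the two peer ends together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                          # initiator side should now reach the target side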
00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:30.788 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:31.047 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:31.047 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:43:31.047 00:43:31.047 --- 10.0.0.3 ping statistics --- 00:43:31.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:31.047 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:31.047 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:31.047 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:43:31.047 00:43:31.047 --- 10.0.0.4 ping statistics --- 00:43:31.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:31.047 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:31.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:31.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:43:31.047 00:43:31.047 --- 10.0.0.1 ping statistics --- 00:43:31.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:31.047 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:31.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:31.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:43:31.047 00:43:31.047 --- 10.0.0.2 ping statistics --- 00:43:31.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:31.047 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:43:31.047 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65719 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65719 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65719 ']' 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:31.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:31.048 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.307 [2024-12-09 09:55:38.557877] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
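The ipts calls a few entries up (nvmf/common.sh@790) and the iptables-save / grep -v SPDK_NVMF / iptables-restore sequence used during teardown (visible at the end of the nvmf_zcopy run above and again when this test finishes) are what keep the host firewall unchanged across tests: every rule the test inserts carries an identifying comment, and teardown restores a dump with those tagged rules filtered out. A condensed sketch of what the two helpers expand to, assuming the same names and tag string used by test/nvmf/common.sh:

  # ipts: add a rule and tag it with an SPDK_NVMF comment so it can be recognised later
  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # iptr: re-load the current ruleset minus everything tagged SPDK_NVMF,
  # leaving only the rules that existed before the test started
  iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }
  iptr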
00:43:31.307 [2024-12-09 09:55:38.557991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:31.307 [2024-12-09 09:55:38.712942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:31.307 [2024-12-09 09:55:38.753866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:31.307 [2024-12-09 09:55:38.754146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:31.307 [2024-12-09 09:55:38.754248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:31.307 [2024-12-09 09:55:38.754363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:31.307 [2024-12-09 09:55:38.754451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:31.307 [2024-12-09 09:55:38.755553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:31.307 [2024-12-09 09:55:38.755628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:31.307 [2024-12-09 09:55:38.755783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:31.307 [2024-12-09 09:55:38.756034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:31.307 [2024-12-09 09:55:38.790344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:31.565 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:31.565 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.566 [2024-12-09 09:55:38.890927] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.566 Malloc0 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:31.566 09:55:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.566 [2024-12-09 09:55:38.950583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:31.566 test case1: single bdev can't be used in multiple subsystems 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.566 [2024-12-09 09:55:38.974415] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:43:31.566 [2024-12-09 09:55:38.974741] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:43:31.566 [2024-12-09 09:55:38.974878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.566 request: 00:43:31.566 { 00:43:31.566 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:43:31.566 "namespace": { 00:43:31.566 "bdev_name": "Malloc0", 00:43:31.566 "no_auto_visible": false, 00:43:31.566 "hide_metadata": false 00:43:31.566 }, 00:43:31.566 "method": "nvmf_subsystem_add_ns", 00:43:31.566 "req_id": 1 00:43:31.566 } 00:43:31.566 Got JSON-RPC error response 00:43:31.566 response: 00:43:31.566 { 00:43:31.566 "code": -32602, 00:43:31.566 "message": "Invalid parameters" 00:43:31.566 } 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:43:31.566 Adding namespace failed - expected result. 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:43:31.566 test case2: host connect to nvmf target in multiple paths 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:31.566 [2024-12-09 09:55:38.990632] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.566 09:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid=eacce601-ab17-4447-87df-663b0f4f5455 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:43:31.825 09:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid=eacce601-ab17-4447-87df-663b0f4f5455 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:43:31.825 09:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:43:31.825 09:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:43:31.825 09:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:31.825 09:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:43:31.825 09:55:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:43:34.357 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:34.357 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:34.357 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:34.357 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:43:34.357 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:43:34.357 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:43:34.357 09:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:34.357 [global] 00:43:34.357 thread=1 00:43:34.357 invalidate=1 00:43:34.357 rw=write 00:43:34.357 time_based=1 00:43:34.357 runtime=1 00:43:34.357 ioengine=libaio 00:43:34.357 direct=1 00:43:34.357 bs=4096 00:43:34.357 iodepth=1 00:43:34.357 norandommap=0 00:43:34.357 numjobs=1 00:43:34.357 00:43:34.357 verify_dump=1 00:43:34.357 verify_backlog=512 00:43:34.357 verify_state_save=0 00:43:34.357 do_verify=1 00:43:34.357 verify=crc32c-intel 00:43:34.357 [job0] 00:43:34.357 filename=/dev/nvme0n1 00:43:34.357 Could not set queue depth (nvme0n1) 00:43:34.357 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:34.357 fio-3.35 00:43:34.357 Starting 1 thread 00:43:35.299 00:43:35.299 job0: (groupid=0, jobs=1): err= 0: pid=65803: Mon Dec 9 09:55:42 2024 00:43:35.299 read: IOPS=2825, BW=11.0MiB/s (11.6MB/s)(11.0MiB/1001msec) 00:43:35.299 slat (nsec): min=13449, max=61405, avg=16921.53, stdev=4074.90 00:43:35.299 clat (usec): min=146, max=591, avg=182.86, stdev=21.60 00:43:35.299 lat (usec): min=161, max=618, avg=199.78, stdev=22.57 00:43:35.299 clat percentiles (usec): 00:43:35.299 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:43:35.299 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:43:35.299 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 215], 00:43:35.299 | 99.00th=[ 237], 99.50th=[ 247], 99.90th=[ 519], 99.95th=[ 545], 00:43:35.299 | 99.99th=[ 594] 00:43:35.299 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:43:35.299 slat (nsec): min=15358, max=92585, avg=24067.35, stdev=6568.13 00:43:35.299 clat (usec): min=90, max=257, avg=113.67, stdev=14.39 00:43:35.299 lat (usec): min=109, max=340, avg=137.74, stdev=17.31 00:43:35.299 clat percentiles (usec): 00:43:35.299 | 1.00th=[ 94], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 102], 00:43:35.299 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 114], 00:43:35.299 | 70.00th=[ 119], 80.00th=[ 124], 90.00th=[ 133], 95.00th=[ 141], 00:43:35.299 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 237], 00:43:35.299 | 99.99th=[ 258] 00:43:35.299 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:43:35.299 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:43:35.299 lat (usec) : 100=6.63%, 250=93.15%, 500=0.17%, 750=0.05% 00:43:35.299 cpu : usr=3.10%, sys=9.30%, ctx=5900, majf=0, minf=5 00:43:35.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:35.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:35.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:35.299 issued rwts: total=2828,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:35.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:35.299 00:43:35.299 Run status group 0 (all jobs): 00:43:35.299 READ: bw=11.0MiB/s (11.6MB/s), 11.0MiB/s-11.0MiB/s (11.6MB/s-11.6MB/s), io=11.0MiB (11.6MB), run=1001-1001msec 00:43:35.299 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:43:35.299 00:43:35.299 Disk stats (read/write): 00:43:35.299 nvme0n1: ios=2610/2705, 
merge=0/0, ticks=500/338, in_queue=838, util=91.48% 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:35.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:35.299 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:35.300 rmmod nvme_tcp 00:43:35.300 rmmod nvme_fabrics 00:43:35.300 rmmod nvme_keyring 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65719 ']' 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65719 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65719 ']' 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65719 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65719 00:43:35.300 killing process with pid 65719 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65719' 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 65719 00:43:35.300 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65719 00:43:35.559 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:35.559 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:35.559 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:35.559 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:43:35.559 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:43:35.559 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:43:35.559 09:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:35.559 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:35.559 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:35.559 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:35.559 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:35.559 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:35.559 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:43:35.817 00:43:35.817 real 0m5.347s 00:43:35.817 user 0m15.643s 00:43:35.817 sys 0m2.269s 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:35.817 ************************************ 00:43:35.817 END TEST nvmf_nmic 00:43:35.817 
************************************ 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:43:35.817 ************************************ 00:43:35.817 START TEST nvmf_fio_target 00:43:35.817 ************************************ 00:43:35.817 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:43:36.077 * Looking for test storage... 00:43:36.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:36.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.077 --rc genhtml_branch_coverage=1 00:43:36.077 --rc genhtml_function_coverage=1 00:43:36.077 --rc genhtml_legend=1 00:43:36.077 --rc geninfo_all_blocks=1 00:43:36.077 --rc geninfo_unexecuted_blocks=1 00:43:36.077 00:43:36.077 ' 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:36.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.077 --rc genhtml_branch_coverage=1 00:43:36.077 --rc genhtml_function_coverage=1 00:43:36.077 --rc genhtml_legend=1 00:43:36.077 --rc geninfo_all_blocks=1 00:43:36.077 --rc geninfo_unexecuted_blocks=1 00:43:36.077 00:43:36.077 ' 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:36.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.077 --rc genhtml_branch_coverage=1 00:43:36.077 --rc genhtml_function_coverage=1 00:43:36.077 --rc genhtml_legend=1 00:43:36.077 --rc geninfo_all_blocks=1 00:43:36.077 --rc geninfo_unexecuted_blocks=1 00:43:36.077 00:43:36.077 ' 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:36.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:36.077 --rc genhtml_branch_coverage=1 00:43:36.077 --rc genhtml_function_coverage=1 00:43:36.077 --rc genhtml_legend=1 00:43:36.077 --rc geninfo_all_blocks=1 00:43:36.077 --rc geninfo_unexecuted_blocks=1 00:43:36.077 00:43:36.077 ' 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:43:36.077 
09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:36.077 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:36.078 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:36.078 09:55:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:36.078 Cannot find device "nvmf_init_br" 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:43:36.078 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:36.337 Cannot find device "nvmf_init_br2" 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:36.337 Cannot find device "nvmf_tgt_br" 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:36.337 Cannot find device "nvmf_tgt_br2" 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:36.337 Cannot find device "nvmf_init_br" 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:36.337 Cannot find device "nvmf_init_br2" 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:36.337 Cannot find device "nvmf_tgt_br" 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:36.337 Cannot find device "nvmf_tgt_br2" 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:36.337 Cannot find device "nvmf_br" 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:36.337 Cannot find device "nvmf_init_if" 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:36.337 Cannot find device "nvmf_init_if2" 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:36.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:43:36.337 
09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:36.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:36.337 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:36.596 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:36.596 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:43:36.596 00:43:36.596 --- 10.0.0.3 ping statistics --- 00:43:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:36.596 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:36.596 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:36.596 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:43:36.596 00:43:36.596 --- 10.0.0.4 ping statistics --- 00:43:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:36.596 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:36.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:36.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:43:36.596 00:43:36.596 --- 10.0.0.1 ping statistics --- 00:43:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:36.596 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:36.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:36.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:43:36.596 00:43:36.596 --- 10.0.0.2 ping statistics --- 00:43:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:36.596 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66038 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66038 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66038 ']' 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:36.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:36.596 09:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.596 [2024-12-09 09:55:44.047651] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
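(Annotation, condensed from the xtrace above. The earlier "Cannot find device" / "Cannot open network namespace" messages are expected on a clean host: nvmf_veth_init tears down any leftover interfaces on a best-effort basis, which is why each failed delete is followed by a `true` entry, before rebuilding the topology. Keeping only the device names, addresses, and port that actually appear in the log, the network under test amounts to the sketch below. This is an illustrative plain-shell summary, not the literal body of nvmf/common.sh, and it omits the comment tags that the `ipts` wrapper adds to each iptables rule.)

    # Sketch of the nvmf_veth_init topology (names/addresses as logged above).
    # Assumption: run as root on a Linux host with iproute2 and iptables.
    ip netns add nvmf_tgt_ns_spdk

    # Two initiator-side and two target-side veth pairs; the *_br ends join the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # The target ends move into the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # A single bridge ties the four *_br ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

    # Open the NVMe/TCP port on the initiator ends and allow bridge forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity check in both directions, matching the pings logged above.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2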
00:43:36.596 [2024-12-09 09:55:44.047906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:36.855 [2024-12-09 09:55:44.208694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:36.855 [2024-12-09 09:55:44.256472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:36.855 [2024-12-09 09:55:44.256535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:36.855 [2024-12-09 09:55:44.256559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:36.855 [2024-12-09 09:55:44.256569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:36.855 [2024-12-09 09:55:44.256578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:36.855 [2024-12-09 09:55:44.257489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:36.855 [2024-12-09 09:55:44.257669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:36.855 [2024-12-09 09:55:44.258205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:36.855 [2024-12-09 09:55:44.258217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:36.855 [2024-12-09 09:55:44.292009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:37.114 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:37.114 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:43:37.114 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:37.114 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:37.114 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:37.114 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:37.114 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:37.372 [2024-12-09 09:55:44.692098] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:37.372 09:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:37.631 09:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:37.631 09:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:37.888 09:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:37.888 09:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:38.146 09:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:38.146 09:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:38.404 09:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:38.404 09:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:38.662 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:38.920 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:39.177 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:39.434 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:39.434 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:39.691 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:43:39.691 09:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:39.950 09:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:40.208 09:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:40.208 09:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:40.467 09:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:40.467 09:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:40.726 09:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:41.292 [2024-12-09 09:55:48.487145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:41.292 09:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:41.551 09:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:41.809 09:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid=eacce601-ab17-4447-87df-663b0f4f5455 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:43:41.809 09:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:41.809 09:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:43:41.809 09:55:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:41.809 09:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:43:41.809 09:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:43:41.809 09:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:43:43.766 09:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:43.766 09:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:43.766 09:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:43.766 09:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:43:43.766 09:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:43.766 09:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:43:43.766 09:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:43.766 [global] 00:43:43.766 thread=1 00:43:43.766 invalidate=1 00:43:43.766 rw=write 00:43:43.766 time_based=1 00:43:43.766 runtime=1 00:43:43.766 ioengine=libaio 00:43:43.766 direct=1 00:43:43.766 bs=4096 00:43:43.766 iodepth=1 00:43:43.766 norandommap=0 00:43:43.766 numjobs=1 00:43:43.766 00:43:43.766 verify_dump=1 00:43:43.766 verify_backlog=512 00:43:43.766 verify_state_save=0 00:43:43.766 do_verify=1 00:43:43.766 verify=crc32c-intel 00:43:43.766 [job0] 00:43:43.766 filename=/dev/nvme0n1 00:43:43.766 [job1] 00:43:43.766 filename=/dev/nvme0n2 00:43:44.025 [job2] 00:43:44.025 filename=/dev/nvme0n3 00:43:44.025 [job3] 00:43:44.025 filename=/dev/nvme0n4 00:43:44.025 Could not set queue depth (nvme0n1) 00:43:44.025 Could not set queue depth (nvme0n2) 00:43:44.025 Could not set queue depth (nvme0n3) 00:43:44.025 Could not set queue depth (nvme0n4) 00:43:44.025 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:44.025 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:44.025 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:44.025 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:44.025 fio-3.35 00:43:44.025 Starting 4 threads 00:43:45.402 00:43:45.402 job0: (groupid=0, jobs=1): err= 0: pid=66221: Mon Dec 9 09:55:52 2024 00:43:45.402 read: IOPS=1872, BW=7489KiB/s (7668kB/s)(7496KiB/1001msec) 00:43:45.402 slat (nsec): min=8109, max=48260, avg=12304.90, stdev=4865.60 00:43:45.402 clat (usec): min=186, max=356, avg=264.51, stdev=17.39 00:43:45.402 lat (usec): min=201, max=368, avg=276.81, stdev=18.41 00:43:45.402 clat percentiles (usec): 00:43:45.402 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:43:45.402 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:43:45.402 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 293], 00:43:45.402 | 99.00th=[ 314], 99.50th=[ 318], 99.90th=[ 351], 99.95th=[ 355], 00:43:45.402 | 99.99th=[ 355] 
00:43:45.402 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:43:45.402 slat (usec): min=11, max=391, avg=19.75, stdev=11.08 00:43:45.402 clat (usec): min=108, max=604, avg=212.30, stdev=20.94 00:43:45.402 lat (usec): min=169, max=708, avg=232.05, stdev=24.42 00:43:45.402 clat percentiles (usec): 00:43:45.402 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:43:45.402 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 215], 00:43:45.402 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 241], 00:43:45.402 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 416], 99.95th=[ 490], 00:43:45.402 | 99.99th=[ 603] 00:43:45.402 bw ( KiB/s): min= 8192, max= 8192, per=21.07%, avg=8192.00, stdev= 0.00, samples=1 00:43:45.402 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:45.402 lat (usec) : 250=60.50%, 500=39.47%, 750=0.03% 00:43:45.402 cpu : usr=2.30%, sys=4.50%, ctx=3926, majf=0, minf=7 00:43:45.402 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:45.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:45.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:45.402 issued rwts: total=1874,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:45.402 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:45.402 job1: (groupid=0, jobs=1): err= 0: pid=66222: Mon Dec 9 09:55:52 2024 00:43:45.402 read: IOPS=2811, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:43:45.402 slat (nsec): min=12314, max=58906, avg=16022.67, stdev=4173.37 00:43:45.402 clat (usec): min=140, max=490, avg=169.73, stdev=15.75 00:43:45.402 lat (usec): min=154, max=516, avg=185.75, stdev=17.09 00:43:45.402 clat percentiles (usec): 00:43:45.402 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:43:45.402 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:43:45.402 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 192], 00:43:45.402 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 310], 99.95th=[ 449], 00:43:45.402 | 99.99th=[ 490] 00:43:45.402 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:43:45.402 slat (usec): min=15, max=201, avg=24.49, stdev= 8.04 00:43:45.402 clat (usec): min=95, max=357, avg=126.99, stdev=18.06 00:43:45.402 lat (usec): min=114, max=526, avg=151.48, stdev=21.78 00:43:45.402 clat percentiles (usec): 00:43:45.402 | 1.00th=[ 101], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 117], 00:43:45.402 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 128], 00:43:45.402 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:43:45.402 | 99.00th=[ 196], 99.50th=[ 225], 99.90th=[ 326], 99.95th=[ 334], 00:43:45.402 | 99.99th=[ 359] 00:43:45.402 bw ( KiB/s): min=12288, max=12288, per=31.61%, avg=12288.00, stdev= 0.00, samples=1 00:43:45.402 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:43:45.402 lat (usec) : 100=0.37%, 250=99.32%, 500=0.31% 00:43:45.402 cpu : usr=2.80%, sys=9.40%, ctx=5887, majf=0, minf=11 00:43:45.402 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:45.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:45.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:45.402 issued rwts: total=2814,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:45.402 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:45.402 job2: (groupid=0, jobs=1): err= 0: 
pid=66223: Mon Dec 9 09:55:52 2024 00:43:45.402 read: IOPS=2521, BW=9.85MiB/s (10.3MB/s)(9.86MiB/1001msec) 00:43:45.402 slat (nsec): min=11827, max=33597, avg=14069.68, stdev=2114.37 00:43:45.402 clat (usec): min=147, max=1968, avg=201.35, stdev=53.17 00:43:45.402 lat (usec): min=160, max=1991, avg=215.42, stdev=53.72 00:43:45.402 clat percentiles (usec): 00:43:45.402 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:43:45.402 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:43:45.402 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 255], 00:43:45.402 | 99.00th=[ 375], 99.50th=[ 404], 99.90th=[ 594], 99.95th=[ 644], 00:43:45.402 | 99.99th=[ 1975] 00:43:45.403 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:43:45.403 slat (nsec): min=14935, max=86563, avg=20441.47, stdev=4667.96 00:43:45.403 clat (usec): min=102, max=5952, avg=154.27, stdev=155.86 00:43:45.403 lat (usec): min=122, max=5997, avg=174.72, stdev=157.02 00:43:45.403 clat percentiles (usec): 00:43:45.403 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 120], 20.00th=[ 127], 00:43:45.403 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 147], 00:43:45.403 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 178], 95.00th=[ 212], 00:43:45.403 | 99.00th=[ 281], 99.50th=[ 388], 99.90th=[ 3228], 99.95th=[ 3359], 00:43:45.403 | 99.99th=[ 5932] 00:43:45.403 bw ( KiB/s): min=12288, max=12288, per=31.61%, avg=12288.00, stdev= 0.00, samples=1 00:43:45.403 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:43:45.403 lat (usec) : 250=95.81%, 500=4.03%, 750=0.06% 00:43:45.403 lat (msec) : 2=0.02%, 4=0.06%, 10=0.02% 00:43:45.403 cpu : usr=1.90%, sys=7.00%, ctx=5088, majf=0, minf=10 00:43:45.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:45.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:45.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:45.403 issued rwts: total=2524,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:45.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:45.403 job3: (groupid=0, jobs=1): err= 0: pid=66224: Mon Dec 9 09:55:52 2024 00:43:45.403 read: IOPS=1873, BW=7493KiB/s (7672kB/s)(7500KiB/1001msec) 00:43:45.403 slat (nsec): min=9922, max=45403, avg=14138.70, stdev=3414.36 00:43:45.403 clat (usec): min=178, max=337, avg=262.35, stdev=18.32 00:43:45.403 lat (usec): min=198, max=356, avg=276.49, stdev=18.60 00:43:45.403 clat percentiles (usec): 00:43:45.403 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:43:45.403 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:43:45.403 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:43:45.403 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 338], 99.95th=[ 338], 00:43:45.403 | 99.99th=[ 338] 00:43:45.403 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:43:45.403 slat (nsec): min=15686, max=82859, avg=24217.75, stdev=6504.45 00:43:45.403 clat (usec): min=133, max=685, avg=207.28, stdev=21.79 00:43:45.403 lat (usec): min=170, max=720, avg=231.50, stdev=23.17 00:43:45.403 clat percentiles (usec): 00:43:45.403 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:43:45.403 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:43:45.403 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 229], 95.00th=[ 235], 00:43:45.403 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 343], 99.95th=[ 570], 
00:43:45.403 | 99.99th=[ 685] 00:43:45.403 bw ( KiB/s): min= 8192, max= 8192, per=21.07%, avg=8192.00, stdev= 0.00, samples=1 00:43:45.403 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:45.403 lat (usec) : 250=64.08%, 500=35.87%, 750=0.05% 00:43:45.403 cpu : usr=2.00%, sys=6.40%, ctx=3923, majf=0, minf=9 00:43:45.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:45.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:45.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:45.403 issued rwts: total=1875,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:45.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:45.403 00:43:45.403 Run status group 0 (all jobs): 00:43:45.403 READ: bw=35.5MiB/s (37.2MB/s), 7489KiB/s-11.0MiB/s (7668kB/s-11.5MB/s), io=35.5MiB (37.2MB), run=1001-1001msec 00:43:45.403 WRITE: bw=38.0MiB/s (39.8MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=38.0MiB (39.8MB), run=1001-1001msec 00:43:45.403 00:43:45.403 Disk stats (read/write): 00:43:45.403 nvme0n1: ios=1586/1862, merge=0/0, ticks=403/364, in_queue=767, util=87.88% 00:43:45.403 nvme0n2: ios=2513/2560, merge=0/0, ticks=456/331, in_queue=787, util=90.27% 00:43:45.403 nvme0n3: ios=2048/2438, merge=0/0, ticks=420/385, in_queue=805, util=88.64% 00:43:45.403 nvme0n4: ios=1563/1862, merge=0/0, ticks=421/401, in_queue=822, util=90.03% 00:43:45.403 09:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:45.403 [global] 00:43:45.403 thread=1 00:43:45.403 invalidate=1 00:43:45.403 rw=randwrite 00:43:45.403 time_based=1 00:43:45.403 runtime=1 00:43:45.403 ioengine=libaio 00:43:45.403 direct=1 00:43:45.403 bs=4096 00:43:45.403 iodepth=1 00:43:45.403 norandommap=0 00:43:45.403 numjobs=1 00:43:45.403 00:43:45.403 verify_dump=1 00:43:45.403 verify_backlog=512 00:43:45.403 verify_state_save=0 00:43:45.403 do_verify=1 00:43:45.403 verify=crc32c-intel 00:43:45.403 [job0] 00:43:45.403 filename=/dev/nvme0n1 00:43:45.403 [job1] 00:43:45.403 filename=/dev/nvme0n2 00:43:45.403 [job2] 00:43:45.403 filename=/dev/nvme0n3 00:43:45.403 [job3] 00:43:45.403 filename=/dev/nvme0n4 00:43:45.403 Could not set queue depth (nvme0n1) 00:43:45.403 Could not set queue depth (nvme0n2) 00:43:45.403 Could not set queue depth (nvme0n3) 00:43:45.403 Could not set queue depth (nvme0n4) 00:43:45.403 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:45.403 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:45.403 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:45.403 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:45.403 fio-3.35 00:43:45.403 Starting 4 threads 00:43:46.779 00:43:46.779 job0: (groupid=0, jobs=1): err= 0: pid=66282: Mon Dec 9 09:55:53 2024 00:43:46.779 read: IOPS=2613, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:43:46.779 slat (usec): min=12, max=112, avg=18.06, stdev= 5.61 00:43:46.779 clat (usec): min=141, max=522, avg=172.47, stdev=17.40 00:43:46.779 lat (usec): min=159, max=564, avg=190.53, stdev=18.75 00:43:46.779 clat percentiles (usec): 00:43:46.779 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:43:46.779 | 30.00th=[ 165], 
40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:43:46.779 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 200], 00:43:46.779 | 99.00th=[ 223], 99.50th=[ 237], 99.90th=[ 383], 99.95th=[ 433], 00:43:46.779 | 99.99th=[ 523] 00:43:46.779 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:43:46.779 slat (usec): min=15, max=122, avg=25.57, stdev= 6.72 00:43:46.779 clat (usec): min=102, max=1538, avg=133.42, stdev=32.53 00:43:46.779 lat (usec): min=122, max=1558, avg=158.99, stdev=33.19 00:43:46.779 clat percentiles (usec): 00:43:46.779 | 1.00th=[ 110], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 121], 00:43:46.779 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:43:46.779 | 70.00th=[ 137], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 163], 00:43:46.779 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 408], 99.95th=[ 709], 00:43:46.779 | 99.99th=[ 1532] 00:43:46.779 bw ( KiB/s): min=12288, max=12288, per=25.11%, avg=12288.00, stdev= 0.00, samples=1 00:43:46.779 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:43:46.779 lat (usec) : 250=99.77%, 500=0.18%, 750=0.04% 00:43:46.779 lat (msec) : 2=0.02% 00:43:46.779 cpu : usr=2.60%, sys=10.20%, ctx=5689, majf=0, minf=13 00:43:46.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:46.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.779 issued rwts: total=2616,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:46.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:46.779 job1: (groupid=0, jobs=1): err= 0: pid=66283: Mon Dec 9 09:55:53 2024 00:43:46.779 read: IOPS=2871, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec) 00:43:46.779 slat (nsec): min=11794, max=53847, avg=13971.73, stdev=2488.59 00:43:46.779 clat (usec): min=136, max=1509, avg=170.49, stdev=40.61 00:43:46.780 lat (usec): min=149, max=1523, avg=184.46, stdev=40.84 00:43:46.780 clat percentiles (usec): 00:43:46.780 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:43:46.780 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:43:46.780 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 200], 95.00th=[ 217], 00:43:46.780 | 99.00th=[ 241], 99.50th=[ 273], 99.90th=[ 881], 99.95th=[ 1004], 00:43:46.780 | 99.99th=[ 1516] 00:43:46.780 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:43:46.780 slat (nsec): min=13826, max=74913, avg=20010.89, stdev=4406.40 00:43:46.780 clat (usec): min=91, max=394, avg=129.57, stdev=17.51 00:43:46.780 lat (usec): min=110, max=412, avg=149.58, stdev=18.84 00:43:46.780 clat percentiles (usec): 00:43:46.780 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 118], 00:43:46.780 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 131], 00:43:46.780 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 157], 00:43:46.780 | 99.00th=[ 180], 99.50th=[ 194], 99.90th=[ 293], 99.95th=[ 367], 00:43:46.780 | 99.99th=[ 396] 00:43:46.780 bw ( KiB/s): min=12288, max=12288, per=25.11%, avg=12288.00, stdev= 0.00, samples=1 00:43:46.780 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:43:46.780 lat (usec) : 100=0.29%, 250=99.26%, 500=0.39%, 750=0.02%, 1000=0.02% 00:43:46.780 lat (msec) : 2=0.03% 00:43:46.780 cpu : usr=2.60%, sys=7.60%, ctx=5948, majf=0, minf=13 00:43:46.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:46.780 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.780 issued rwts: total=2874,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:46.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:46.780 job2: (groupid=0, jobs=1): err= 0: pid=66284: Mon Dec 9 09:55:53 2024 00:43:46.780 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:43:46.780 slat (usec): min=12, max=132, avg=14.80, stdev= 4.97 00:43:46.780 clat (usec): min=99, max=301, avg=179.61, stdev=14.52 00:43:46.780 lat (usec): min=162, max=314, avg=194.41, stdev=15.17 00:43:46.780 clat percentiles (usec): 00:43:46.780 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 169], 00:43:46.780 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:43:46.780 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:43:46.780 | 99.00th=[ 225], 99.50th=[ 239], 99.90th=[ 262], 99.95th=[ 289], 00:43:46.780 | 99.99th=[ 302] 00:43:46.780 write: IOPS=3024, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec); 0 zone resets 00:43:46.780 slat (nsec): min=14995, max=75222, avg=21173.02, stdev=4519.71 00:43:46.780 clat (usec): min=106, max=485, avg=141.22, stdev=14.16 00:43:46.780 lat (usec): min=126, max=505, avg=162.39, stdev=14.76 00:43:46.780 clat percentiles (usec): 00:43:46.780 | 1.00th=[ 119], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:43:46.780 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:43:46.780 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:43:46.780 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 221], 99.95th=[ 297], 00:43:46.780 | 99.99th=[ 486] 00:43:46.780 bw ( KiB/s): min=12288, max=12288, per=25.11%, avg=12288.00, stdev= 0.00, samples=1 00:43:46.780 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:43:46.780 lat (usec) : 100=0.02%, 250=99.84%, 500=0.14% 00:43:46.780 cpu : usr=1.70%, sys=8.90%, ctx=5593, majf=0, minf=13 00:43:46.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:46.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.780 issued rwts: total=2560,3028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:46.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:46.780 job3: (groupid=0, jobs=1): err= 0: pid=66285: Mon Dec 9 09:55:53 2024 00:43:46.780 read: IOPS=2563, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:43:46.780 slat (nsec): min=12264, max=44996, avg=15572.26, stdev=3876.53 00:43:46.780 clat (usec): min=150, max=267, avg=177.56, stdev=12.46 00:43:46.780 lat (usec): min=163, max=280, avg=193.13, stdev=13.51 00:43:46.780 clat percentiles (usec): 00:43:46.780 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:43:46.780 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:43:46.780 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:43:46.780 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 231], 99.95th=[ 237], 00:43:46.780 | 99.99th=[ 269] 00:43:46.780 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:43:46.780 slat (nsec): min=14392, max=87756, avg=21837.69, stdev=5474.83 00:43:46.780 clat (usec): min=105, max=1989, avg=138.78, stdev=49.61 00:43:46.780 lat (usec): min=125, max=2011, avg=160.62, stdev=49.90 00:43:46.780 clat percentiles (usec): 00:43:46.780 | 1.00th=[ 115], 5.00th=[ 121], 
10.00th=[ 124], 20.00th=[ 128], 00:43:46.780 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:43:46.780 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:43:46.780 | 99.00th=[ 174], 99.50th=[ 188], 99.90th=[ 506], 99.95th=[ 1876], 00:43:46.780 | 99.99th=[ 1991] 00:43:46.780 bw ( KiB/s): min=12288, max=12288, per=25.11%, avg=12288.00, stdev= 0.00, samples=1 00:43:46.780 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:43:46.780 lat (usec) : 250=99.86%, 500=0.05%, 750=0.05% 00:43:46.780 lat (msec) : 2=0.04% 00:43:46.780 cpu : usr=2.30%, sys=8.50%, ctx=5638, majf=0, minf=9 00:43:46.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:46.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.780 issued rwts: total=2566,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:46.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:46.780 00:43:46.780 Run status group 0 (all jobs): 00:43:46.780 READ: bw=41.4MiB/s (43.4MB/s), 9.99MiB/s-11.2MiB/s (10.5MB/s-11.8MB/s), io=41.5MiB (43.5MB), run=1001-1001msec 00:43:46.780 WRITE: bw=47.8MiB/s (50.1MB/s), 11.8MiB/s-12.0MiB/s (12.4MB/s-12.6MB/s), io=47.8MiB (50.2MB), run=1001-1001msec 00:43:46.780 00:43:46.780 Disk stats (read/write): 00:43:46.780 nvme0n1: ios=2419/2560, merge=0/0, ticks=410/352, in_queue=762, util=86.77% 00:43:46.780 nvme0n2: ios=2539/2560, merge=0/0, ticks=461/347, in_queue=808, util=88.52% 00:43:46.780 nvme0n3: ios=2224/2560, merge=0/0, ticks=406/386, in_queue=792, util=89.07% 00:43:46.780 nvme0n4: ios=2259/2560, merge=0/0, ticks=416/376, in_queue=792, util=89.73% 00:43:46.780 09:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:46.780 [global] 00:43:46.780 thread=1 00:43:46.780 invalidate=1 00:43:46.780 rw=write 00:43:46.780 time_based=1 00:43:46.780 runtime=1 00:43:46.780 ioengine=libaio 00:43:46.780 direct=1 00:43:46.780 bs=4096 00:43:46.780 iodepth=128 00:43:46.780 norandommap=0 00:43:46.780 numjobs=1 00:43:46.780 00:43:46.780 verify_dump=1 00:43:46.780 verify_backlog=512 00:43:46.780 verify_state_save=0 00:43:46.780 do_verify=1 00:43:46.780 verify=crc32c-intel 00:43:46.780 [job0] 00:43:46.780 filename=/dev/nvme0n1 00:43:46.780 [job1] 00:43:46.780 filename=/dev/nvme0n2 00:43:46.780 [job2] 00:43:46.780 filename=/dev/nvme0n3 00:43:46.780 [job3] 00:43:46.780 filename=/dev/nvme0n4 00:43:46.780 Could not set queue depth (nvme0n1) 00:43:46.780 Could not set queue depth (nvme0n2) 00:43:46.780 Could not set queue depth (nvme0n3) 00:43:46.780 Could not set queue depth (nvme0n4) 00:43:46.780 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:46.780 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:46.780 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:46.780 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:46.780 fio-3.35 00:43:46.780 Starting 4 threads 00:43:48.163 00:43:48.163 job0: (groupid=0, jobs=1): err= 0: pid=66346: Mon Dec 9 09:55:55 2024 00:43:48.163 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:43:48.163 slat (usec): min=6, max=14465, avg=226.32, 
stdev=961.32 00:43:48.163 clat (usec): min=15009, max=58389, avg=26868.47, stdev=10024.11 00:43:48.163 lat (usec): min=15041, max=58409, avg=27094.79, stdev=10107.31 00:43:48.163 clat percentiles (usec): 00:43:48.163 | 1.00th=[15795], 5.00th=[17171], 10.00th=[18220], 20.00th=[20579], 00:43:48.163 | 30.00th=[21103], 40.00th=[21890], 50.00th=[22938], 60.00th=[22938], 00:43:48.163 | 70.00th=[25560], 80.00th=[38011], 90.00th=[44827], 95.00th=[49021], 00:43:48.163 | 99.00th=[55313], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:43:48.163 | 99.99th=[58459] 00:43:48.163 write: IOPS=2513, BW=9.82MiB/s (10.3MB/s)(9.87MiB/1005msec); 0 zone resets 00:43:48.163 slat (usec): min=10, max=10188, avg=204.24, stdev=824.01 00:43:48.163 clat (usec): min=3947, max=91478, avg=28230.18, stdev=14948.64 00:43:48.163 lat (usec): min=3983, max=93420, avg=28434.42, stdev=15031.90 00:43:48.163 clat percentiles (usec): 00:43:48.163 | 1.00th=[10683], 5.00th=[17695], 10.00th=[20841], 20.00th=[21627], 00:43:48.163 | 30.00th=[21890], 40.00th=[22414], 50.00th=[22676], 60.00th=[23462], 00:43:48.163 | 70.00th=[26084], 80.00th=[27657], 90.00th=[47973], 95.00th=[67634], 00:43:48.163 | 99.00th=[85459], 99.50th=[86508], 99.90th=[91751], 99.95th=[91751], 00:43:48.163 | 99.99th=[91751] 00:43:48.163 bw ( KiB/s): min= 6904, max=12312, per=16.84%, avg=9608.00, stdev=3824.03, samples=2 00:43:48.163 iops : min= 1726, max= 3078, avg=2402.00, stdev=956.01, samples=2 00:43:48.163 lat (msec) : 4=0.04%, 10=0.24%, 20=10.73%, 50=81.66%, 100=7.32% 00:43:48.163 cpu : usr=2.39%, sys=7.57%, ctx=427, majf=0, minf=15 00:43:48.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:43:48.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:48.163 issued rwts: total=2048,2526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:48.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:48.163 job1: (groupid=0, jobs=1): err= 0: pid=66347: Mon Dec 9 09:55:55 2024 00:43:48.163 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:43:48.163 slat (usec): min=4, max=6687, avg=171.85, stdev=738.62 00:43:48.163 clat (usec): min=10435, max=46069, avg=21722.48, stdev=6341.51 00:43:48.163 lat (usec): min=10457, max=46154, avg=21894.34, stdev=6408.34 00:43:48.163 clat percentiles (usec): 00:43:48.163 | 1.00th=[11994], 5.00th=[13566], 10.00th=[13698], 20.00th=[17957], 00:43:48.163 | 30.00th=[19792], 40.00th=[20317], 50.00th=[21103], 60.00th=[21627], 00:43:48.163 | 70.00th=[22676], 80.00th=[23462], 90.00th=[28443], 95.00th=[39060], 00:43:48.163 | 99.00th=[40109], 99.50th=[40109], 99.90th=[45876], 99.95th=[45876], 00:43:48.163 | 99.99th=[45876] 00:43:48.163 write: IOPS=3088, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1004msec); 0 zone resets 00:43:48.163 slat (usec): min=11, max=6393, avg=143.81, stdev=552.71 00:43:48.163 clat (usec): min=3501, max=29101, avg=19371.13, stdev=4939.79 00:43:48.163 lat (usec): min=3548, max=29135, avg=19514.93, stdev=4981.12 00:43:48.163 clat percentiles (usec): 00:43:48.163 | 1.00th=[10028], 5.00th=[10683], 10.00th=[10945], 20.00th=[14222], 00:43:48.163 | 30.00th=[16712], 40.00th=[20317], 50.00th=[21365], 60.00th=[21890], 00:43:48.163 | 70.00th=[22414], 80.00th=[22938], 90.00th=[24773], 95.00th=[26084], 00:43:48.163 | 99.00th=[28181], 99.50th=[28181], 99.90th=[28967], 99.95th=[28967], 00:43:48.163 | 99.99th=[29230] 00:43:48.163 bw ( KiB/s): min=12288, max=12288, per=21.54%, avg=12288.00, 
stdev= 0.00, samples=2 00:43:48.163 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:43:48.164 lat (msec) : 4=0.21%, 10=0.18%, 20=37.71%, 50=61.90% 00:43:48.164 cpu : usr=2.99%, sys=9.07%, ctx=374, majf=0, minf=13 00:43:48.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:43:48.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:48.164 issued rwts: total=3072,3101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:48.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:48.164 job2: (groupid=0, jobs=1): err= 0: pid=66348: Mon Dec 9 09:55:55 2024 00:43:48.164 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:43:48.164 slat (usec): min=5, max=7240, avg=95.31, stdev=458.83 00:43:48.164 clat (usec): min=2548, max=19485, avg=12567.65, stdev=1729.77 00:43:48.164 lat (usec): min=2561, max=19498, avg=12662.96, stdev=1681.90 00:43:48.164 clat percentiles (usec): 00:43:48.164 | 1.00th=[ 5932], 5.00th=[10814], 10.00th=[11207], 20.00th=[11469], 00:43:48.164 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12780], 60.00th=[13042], 00:43:48.164 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14222], 95.00th=[14877], 00:43:48.164 | 99.00th=[17433], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:43:48.164 | 99.99th=[19530] 00:43:48.164 write: IOPS=5105, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:43:48.164 slat (usec): min=9, max=5825, avg=92.58, stdev=392.43 00:43:48.164 clat (usec): min=283, max=17605, avg=12162.47, stdev=1354.00 00:43:48.164 lat (usec): min=2543, max=17630, avg=12255.05, stdev=1301.12 00:43:48.164 clat percentiles (usec): 00:43:48.164 | 1.00th=[ 9372], 5.00th=[10552], 10.00th=[10683], 20.00th=[10945], 00:43:48.164 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12256], 60.00th=[12649], 00:43:48.164 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[13960], 00:43:48.164 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:43:48.164 | 99.99th=[17695] 00:43:48.164 bw ( KiB/s): min=20480, max=20521, per=35.94%, avg=20500.50, stdev=28.99, samples=2 00:43:48.164 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:43:48.164 lat (usec) : 500=0.01% 00:43:48.164 lat (msec) : 4=0.31%, 10=2.12%, 20=97.56% 00:43:48.164 cpu : usr=4.59%, sys=14.27%, ctx=323, majf=0, minf=7 00:43:48.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:48.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:48.164 issued rwts: total=5120,5121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:48.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:48.164 job3: (groupid=0, jobs=1): err= 0: pid=66349: Mon Dec 9 09:55:55 2024 00:43:48.164 read: IOPS=3067, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1005msec) 00:43:48.164 slat (usec): min=7, max=14486, avg=149.98, stdev=709.31 00:43:48.164 clat (usec): min=1115, max=69160, avg=18245.81, stdev=11537.76 00:43:48.164 lat (usec): min=4543, max=69182, avg=18395.79, stdev=11639.67 00:43:48.164 clat percentiles (usec): 00:43:48.164 | 1.00th=[10421], 5.00th=[12518], 10.00th=[12780], 20.00th=[12911], 00:43:48.164 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:43:48.164 | 70.00th=[13829], 80.00th=[15270], 90.00th=[39584], 95.00th=[47449], 00:43:48.164 | 99.00th=[56361], 
99.50th=[56361], 99.90th=[58459], 99.95th=[61080], 00:43:48.164 | 99.99th=[68682] 00:43:48.164 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:43:48.164 slat (usec): min=8, max=9257, avg=143.48, stdev=672.47 00:43:48.164 clat (usec): min=4674, max=93613, avg=19706.20, stdev=15079.58 00:43:48.164 lat (usec): min=4692, max=93643, avg=19849.68, stdev=15155.65 00:43:48.164 clat percentiles (usec): 00:43:48.164 | 1.00th=[10290], 5.00th=[12256], 10.00th=[12518], 20.00th=[12780], 00:43:48.164 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13960], 60.00th=[14222], 00:43:48.164 | 70.00th=[14746], 80.00th=[25560], 90.00th=[28967], 95.00th=[56361], 00:43:48.164 | 99.00th=[82314], 99.50th=[88605], 99.90th=[93848], 99.95th=[93848], 00:43:48.164 | 99.99th=[93848] 00:43:48.164 bw ( KiB/s): min= 7256, max=20480, per=24.31%, avg=13868.00, stdev=9350.78, samples=2 00:43:48.164 iops : min= 1814, max= 5120, avg=3467.00, stdev=2337.70, samples=2 00:43:48.164 lat (msec) : 2=0.01%, 10=0.58%, 20=78.70%, 50=15.61%, 100=5.08% 00:43:48.164 cpu : usr=2.89%, sys=9.76%, ctx=346, majf=0, minf=18 00:43:48.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:43:48.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:48.164 issued rwts: total=3083,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:48.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:48.164 00:43:48.164 Run status group 0 (all jobs): 00:43:48.164 READ: bw=51.8MiB/s (54.3MB/s), 8151KiB/s-19.9MiB/s (8347kB/s-20.9MB/s), io=52.0MiB (54.6MB), run=1003-1005msec 00:43:48.164 WRITE: bw=55.7MiB/s (58.4MB/s), 9.82MiB/s-19.9MiB/s (10.3MB/s-20.9MB/s), io=56.0MiB (58.7MB), run=1003-1005msec 00:43:48.164 00:43:48.164 Disk stats (read/write): 00:43:48.164 nvme0n1: ios=1954/2048, merge=0/0, ticks=15194/16621, in_queue=31815, util=87.78% 00:43:48.164 nvme0n2: ios=2608/2986, merge=0/0, ticks=16953/17407, in_queue=34360, util=89.59% 00:43:48.164 nvme0n3: ios=4128/4608, merge=0/0, ticks=11961/11908, in_queue=23869, util=88.45% 00:43:48.164 nvme0n4: ios=2945/3072, merge=0/0, ticks=12170/12551, in_queue=24721, util=89.73% 00:43:48.164 09:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:48.164 [global] 00:43:48.164 thread=1 00:43:48.164 invalidate=1 00:43:48.164 rw=randwrite 00:43:48.164 time_based=1 00:43:48.164 runtime=1 00:43:48.164 ioengine=libaio 00:43:48.164 direct=1 00:43:48.164 bs=4096 00:43:48.164 iodepth=128 00:43:48.164 norandommap=0 00:43:48.164 numjobs=1 00:43:48.164 00:43:48.164 verify_dump=1 00:43:48.164 verify_backlog=512 00:43:48.164 verify_state_save=0 00:43:48.164 do_verify=1 00:43:48.164 verify=crc32c-intel 00:43:48.164 [job0] 00:43:48.164 filename=/dev/nvme0n1 00:43:48.164 [job1] 00:43:48.164 filename=/dev/nvme0n2 00:43:48.164 [job2] 00:43:48.164 filename=/dev/nvme0n3 00:43:48.164 [job3] 00:43:48.164 filename=/dev/nvme0n4 00:43:48.164 Could not set queue depth (nvme0n1) 00:43:48.164 Could not set queue depth (nvme0n2) 00:43:48.164 Could not set queue depth (nvme0n3) 00:43:48.164 Could not set queue depth (nvme0n4) 00:43:48.164 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:48.164 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:43:48.164 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:48.164 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:48.164 fio-3.35 00:43:48.164 Starting 4 threads 00:43:49.539 00:43:49.539 job0: (groupid=0, jobs=1): err= 0: pid=66402: Mon Dec 9 09:55:56 2024 00:43:49.539 read: IOPS=4627, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1003msec) 00:43:49.539 slat (usec): min=5, max=3686, avg=100.40, stdev=480.82 00:43:49.539 clat (usec): min=335, max=15217, avg=13245.86, stdev=1136.47 00:43:49.539 lat (usec): min=3092, max=15244, avg=13346.26, stdev=1033.10 00:43:49.539 clat percentiles (usec): 00:43:49.539 | 1.00th=[10028], 5.00th=[12387], 10.00th=[12649], 20.00th=[12911], 00:43:49.539 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:43:49.539 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14353], 95.00th=[14615], 00:43:49.539 | 99.00th=[15008], 99.50th=[15008], 99.90th=[15139], 99.95th=[15139], 00:43:49.539 | 99.99th=[15270] 00:43:49.539 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:43:49.539 slat (usec): min=8, max=3518, avg=97.14, stdev=416.14 00:43:49.539 clat (usec): min=5745, max=15914, avg=12727.20, stdev=1178.76 00:43:49.539 lat (usec): min=5762, max=15935, avg=12824.33, stdev=1109.32 00:43:49.539 clat percentiles (usec): 00:43:49.539 | 1.00th=[ 9110], 5.00th=[11338], 10.00th=[11731], 20.00th=[11994], 00:43:49.539 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:43:49.539 | 70.00th=[13042], 80.00th=[13435], 90.00th=[14091], 95.00th=[15008], 00:43:49.539 | 99.00th=[15533], 99.50th=[15795], 99.90th=[15926], 99.95th=[15926], 00:43:49.539 | 99.99th=[15926] 00:43:49.539 bw ( KiB/s): min=19680, max=20480, per=26.13%, avg=20080.00, stdev=565.69, samples=2 00:43:49.539 iops : min= 4920, max= 5120, avg=5020.00, stdev=141.42, samples=2 00:43:49.539 lat (usec) : 500=0.01% 00:43:49.539 lat (msec) : 4=0.33%, 10=1.24%, 20=98.42% 00:43:49.539 cpu : usr=4.49%, sys=13.67%, ctx=348, majf=0, minf=13 00:43:49.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:49.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:49.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:49.539 issued rwts: total=4641,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:49.539 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:49.539 job1: (groupid=0, jobs=1): err= 0: pid=66403: Mon Dec 9 09:55:56 2024 00:43:49.539 read: IOPS=4627, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1003msec) 00:43:49.539 slat (usec): min=7, max=4304, avg=100.97, stdev=481.10 00:43:49.539 clat (usec): min=339, max=16260, avg=13268.98, stdev=1143.01 00:43:49.539 lat (usec): min=2805, max=16271, avg=13369.94, stdev=1039.87 00:43:49.539 clat percentiles (usec): 00:43:49.539 | 1.00th=[10290], 5.00th=[12518], 10.00th=[12780], 20.00th=[13042], 00:43:49.539 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13435], 00:43:49.539 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13960], 95.00th=[14091], 00:43:49.539 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16188], 99.95th=[16188], 00:43:49.539 | 99.99th=[16319] 00:43:49.539 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:43:49.539 slat (usec): min=10, max=3896, avg=96.59, stdev=414.33 00:43:49.539 clat (usec): min=5617, max=14856, avg=12710.09, stdev=878.63 00:43:49.539 lat (usec): 
min=5638, max=14873, avg=12806.68, stdev=777.44 00:43:49.539 clat percentiles (usec): 00:43:49.539 | 1.00th=[ 9503], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:43:49.539 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:43:49.539 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:43:49.539 | 99.00th=[14222], 99.50th=[14746], 99.90th=[14877], 99.95th=[14877], 00:43:49.539 | 99.99th=[14877] 00:43:49.539 bw ( KiB/s): min=19680, max=20521, per=26.16%, avg=20100.50, stdev=594.68, samples=2 00:43:49.539 iops : min= 4920, max= 5130, avg=5025.00, stdev=148.49, samples=2 00:43:49.539 lat (usec) : 500=0.01% 00:43:49.539 lat (msec) : 4=0.33%, 10=0.94%, 20=98.72% 00:43:49.539 cpu : usr=4.19%, sys=13.97%, ctx=306, majf=0, minf=11 00:43:49.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:49.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:49.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:49.539 issued rwts: total=4641,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:49.539 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:49.539 job2: (groupid=0, jobs=1): err= 0: pid=66404: Mon Dec 9 09:55:56 2024 00:43:49.539 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:43:49.539 slat (usec): min=7, max=6135, avg=114.36, stdev=559.86 00:43:49.539 clat (usec): min=10983, max=19792, avg=15062.39, stdev=1026.60 00:43:49.539 lat (usec): min=13852, max=19803, avg=15176.75, stdev=871.53 00:43:49.539 clat percentiles (usec): 00:43:49.539 | 1.00th=[11731], 5.00th=[14091], 10.00th=[14222], 20.00th=[14615], 00:43:49.539 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15139], 60.00th=[15270], 00:43:49.539 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15664], 95.00th=[15926], 00:43:49.539 | 99.00th=[19530], 99.50th=[19792], 99.90th=[19792], 99.95th=[19792], 00:43:49.539 | 99.99th=[19792] 00:43:49.539 write: IOPS=4572, BW=17.9MiB/s (18.7MB/s)(17.9MiB/1001msec); 0 zone resets 00:43:49.539 slat (usec): min=9, max=5072, avg=108.91, stdev=485.19 00:43:49.539 clat (usec): min=331, max=16937, avg=14130.25, stdev=1394.53 00:43:49.539 lat (usec): min=2720, max=16951, avg=14239.16, stdev=1307.11 00:43:49.539 clat percentiles (usec): 00:43:49.539 | 1.00th=[ 6915], 5.00th=[12256], 10.00th=[13566], 20.00th=[13829], 00:43:49.539 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 00:43:49.539 | 70.00th=[14615], 80.00th=[14746], 90.00th=[15008], 95.00th=[15533], 00:43:49.539 | 99.00th=[16450], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:43:49.539 | 99.99th=[16909] 00:43:49.539 bw ( KiB/s): min=16937, max=18658, per=23.16%, avg=17797.50, stdev=1216.93, samples=2 00:43:49.539 iops : min= 4234, max= 4664, avg=4449.00, stdev=304.06, samples=2 00:43:49.539 lat (usec) : 500=0.01% 00:43:49.539 lat (msec) : 4=0.37%, 10=0.37%, 20=99.25% 00:43:49.539 cpu : usr=2.90%, sys=13.00%, ctx=274, majf=0, minf=17 00:43:49.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:49.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:49.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:49.539 issued rwts: total=4096,4577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:49.539 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:49.539 job3: (groupid=0, jobs=1): err= 0: pid=66405: Mon Dec 9 09:55:56 2024 00:43:49.539 read: IOPS=4071, BW=15.9MiB/s 
(16.7MB/s)(16.0MiB/1006msec) 00:43:49.539 slat (usec): min=6, max=7130, avg=117.77, stdev=565.91 00:43:49.539 clat (usec): min=9295, max=22928, avg=15232.86, stdev=1302.04 00:43:49.539 lat (usec): min=9311, max=22968, avg=15350.62, stdev=1336.10 00:43:49.539 clat percentiles (usec): 00:43:49.539 | 1.00th=[11076], 5.00th=[13042], 10.00th=[13698], 20.00th=[14353], 00:43:49.539 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:43:49.539 | 70.00th=[15795], 80.00th=[15926], 90.00th=[16319], 95.00th=[16909], 00:43:49.539 | 99.00th=[19006], 99.50th=[19530], 99.90th=[22152], 99.95th=[22414], 00:43:49.539 | 99.99th=[22938] 00:43:49.540 write: IOPS=4483, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1006msec); 0 zone resets 00:43:49.540 slat (usec): min=10, max=6721, avg=106.98, stdev=620.64 00:43:49.540 clat (usec): min=5020, max=22329, avg=14340.92, stdev=1663.41 00:43:49.540 lat (usec): min=5061, max=22344, avg=14447.90, stdev=1759.41 00:43:49.540 clat percentiles (usec): 00:43:49.540 | 1.00th=[ 8979], 5.00th=[11731], 10.00th=[12649], 20.00th=[13435], 00:43:49.540 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14484], 60.00th=[14615], 00:43:49.540 | 70.00th=[14746], 80.00th=[15270], 90.00th=[15795], 95.00th=[17171], 00:43:49.540 | 99.00th=[19268], 99.50th=[20055], 99.90th=[21627], 99.95th=[21627], 00:43:49.540 | 99.99th=[22414] 00:43:49.540 bw ( KiB/s): min=17162, max=17900, per=22.81%, avg=17531.00, stdev=521.84, samples=2 00:43:49.540 iops : min= 4290, max= 4475, avg=4382.50, stdev=130.81, samples=2 00:43:49.540 lat (msec) : 10=1.06%, 20=98.54%, 50=0.41% 00:43:49.540 cpu : usr=4.08%, sys=12.14%, ctx=287, majf=0, minf=9 00:43:49.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:49.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:49.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:49.540 issued rwts: total=4096,4510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:49.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:49.540 00:43:49.540 Run status group 0 (all jobs): 00:43:49.540 READ: bw=67.9MiB/s (71.1MB/s), 15.9MiB/s-18.1MiB/s (16.7MB/s-19.0MB/s), io=68.3MiB (71.6MB), run=1001-1006msec 00:43:49.540 WRITE: bw=75.0MiB/s (78.7MB/s), 17.5MiB/s-19.9MiB/s (18.4MB/s-20.9MB/s), io=75.5MiB (79.2MB), run=1001-1006msec 00:43:49.540 00:43:49.540 Disk stats (read/write): 00:43:49.540 nvme0n1: ios=4145/4224, merge=0/0, ticks=12364/11526, in_queue=23890, util=87.65% 00:43:49.540 nvme0n2: ios=4143/4224, merge=0/0, ticks=12377/11545, in_queue=23922, util=89.64% 00:43:49.540 nvme0n3: ios=3584/3808, merge=0/0, ticks=12360/11828, in_queue=24188, util=88.77% 00:43:49.540 nvme0n4: ios=3601/3783, merge=0/0, ticks=26593/22612, in_queue=49205, util=89.97% 00:43:49.540 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:49.540 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66419 00:43:49.540 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:49.540 09:55:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:49.540 [global] 00:43:49.540 thread=1 00:43:49.540 invalidate=1 00:43:49.540 rw=read 00:43:49.540 time_based=1 00:43:49.540 runtime=10 00:43:49.540 ioengine=libaio 00:43:49.540 direct=1 00:43:49.540 bs=4096 00:43:49.540 iodepth=1 00:43:49.540 norandommap=1 00:43:49.540 numjobs=1 
00:43:49.540 00:43:49.540 [job0] 00:43:49.540 filename=/dev/nvme0n1 00:43:49.540 [job1] 00:43:49.540 filename=/dev/nvme0n2 00:43:49.540 [job2] 00:43:49.540 filename=/dev/nvme0n3 00:43:49.540 [job3] 00:43:49.540 filename=/dev/nvme0n4 00:43:49.540 Could not set queue depth (nvme0n1) 00:43:49.540 Could not set queue depth (nvme0n2) 00:43:49.540 Could not set queue depth (nvme0n3) 00:43:49.540 Could not set queue depth (nvme0n4) 00:43:49.540 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:49.540 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:49.540 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:49.540 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:49.540 fio-3.35 00:43:49.540 Starting 4 threads 00:43:52.819 09:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:52.819 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40607744, buflen=4096 00:43:52.819 fio: pid=66472, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:52.819 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:53.078 fio: pid=66471, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:53.078 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=63057920, buflen=4096 00:43:53.078 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:53.078 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:53.645 fio: pid=66465, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:53.645 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=54816768, buflen=4096 00:43:53.645 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:53.645 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:53.903 fio: pid=66467, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:53.903 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16068608, buflen=4096 00:43:53.903 00:43:53.903 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66465: Mon Dec 9 09:56:01 2024 00:43:53.903 read: IOPS=3522, BW=13.8MiB/s (14.4MB/s)(52.3MiB/3800msec) 00:43:53.903 slat (usec): min=8, max=10784, avg=19.73, stdev=152.52 00:43:53.903 clat (usec): min=131, max=4179, avg=262.32, stdev=78.81 00:43:53.903 lat (usec): min=143, max=11119, avg=282.06, stdev=172.66 00:43:53.903 clat percentiles (usec): 00:43:53.903 | 1.00th=[ 141], 5.00th=[ 153], 10.00th=[ 165], 20.00th=[ 229], 00:43:53.903 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:43:53.903 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 363], 00:43:53.903 | 99.00th=[ 429], 99.50th=[ 449], 99.90th=[ 578], 99.95th=[ 840], 
00:43:53.903 | 99.99th=[ 3425] 00:43:53.903 bw ( KiB/s): min=12176, max=16182, per=24.32%, avg=13644.29, stdev=1275.42, samples=7 00:43:53.903 iops : min= 3044, max= 4045, avg=3411.00, stdev=318.69, samples=7 00:43:53.903 lat (usec) : 250=29.86%, 500=69.95%, 750=0.13%, 1000=0.01% 00:43:53.903 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:43:53.903 cpu : usr=1.26%, sys=5.69%, ctx=13399, majf=0, minf=1 00:43:53.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.903 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.903 issued rwts: total=13384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:53.903 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66467: Mon Dec 9 09:56:01 2024 00:43:53.903 read: IOPS=4828, BW=18.9MiB/s (19.8MB/s)(79.3MiB/4206msec) 00:43:53.903 slat (usec): min=8, max=12305, avg=21.50, stdev=169.06 00:43:53.903 clat (usec): min=99, max=13617, avg=183.66, stdev=108.97 00:43:53.903 lat (usec): min=137, max=13641, avg=205.16, stdev=201.69 00:43:53.903 clat percentiles (usec): 00:43:53.903 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:43:53.903 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:43:53.903 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 225], 95.00th=[ 262], 00:43:53.903 | 99.00th=[ 343], 99.50th=[ 388], 99.90th=[ 494], 99.95th=[ 644], 00:43:53.903 | 99.99th=[ 2245] 00:43:53.903 bw ( KiB/s): min=14846, max=22200, per=34.53%, avg=19374.50, stdev=2075.42, samples=8 00:43:53.903 iops : min= 3711, max= 5550, avg=4843.50, stdev=519.02, samples=8 00:43:53.903 lat (usec) : 100=0.01%, 250=93.66%, 500=6.23%, 750=0.06%, 1000=0.01% 00:43:53.903 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01% 00:43:53.903 cpu : usr=2.05%, sys=7.78%, ctx=20321, majf=0, minf=2 00:43:53.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.903 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.903 issued rwts: total=20308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:53.903 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66471: Mon Dec 9 09:56:01 2024 00:43:53.903 read: IOPS=4635, BW=18.1MiB/s (19.0MB/s)(60.1MiB/3321msec) 00:43:53.903 slat (usec): min=8, max=11810, avg=19.34, stdev=126.94 00:43:53.903 clat (usec): min=140, max=7020, avg=194.40, stdev=82.06 00:43:53.903 lat (usec): min=163, max=12491, avg=213.74, stdev=153.88 00:43:53.903 clat percentiles (usec): 00:43:53.903 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:43:53.903 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 190], 00:43:53.903 | 70.00th=[ 196], 80.00th=[ 206], 90.00th=[ 227], 95.00th=[ 253], 00:43:53.903 | 99.00th=[ 351], 99.50th=[ 408], 99.90th=[ 766], 99.95th=[ 1188], 00:43:53.903 | 99.99th=[ 4228] 00:43:53.903 bw ( KiB/s): min=17496, max=20584, per=33.64%, avg=18873.33, stdev=1044.25, samples=6 00:43:53.903 iops : min= 4374, max= 5146, avg=4718.33, stdev=261.06, samples=6 00:43:53.903 lat (usec) : 250=94.62%, 500=5.14%, 750=0.13%, 1000=0.03% 00:43:53.903 lat (msec) : 2=0.05%, 4=0.02%, 10=0.01% 00:43:53.903 cpu : 
usr=1.84%, sys=7.08%, ctx=15401, majf=0, minf=2 00:43:53.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.903 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.903 issued rwts: total=15396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:53.903 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66472: Mon Dec 9 09:56:01 2024 00:43:53.903 read: IOPS=3306, BW=12.9MiB/s (13.5MB/s)(38.7MiB/2999msec) 00:43:53.903 slat (nsec): min=8408, max=80543, avg=18162.17, stdev=7643.88 00:43:53.903 clat (usec): min=218, max=2476, avg=282.38, stdev=49.81 00:43:53.903 lat (usec): min=232, max=2490, avg=300.55, stdev=51.48 00:43:53.903 clat percentiles (usec): 00:43:53.903 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 255], 00:43:53.903 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:43:53.903 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 371], 00:43:53.903 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 676], 99.95th=[ 799], 00:43:53.903 | 99.99th=[ 2474] 00:43:53.903 bw ( KiB/s): min=12168, max=13840, per=23.70%, avg=13299.20, stdev=717.97, samples=5 00:43:53.903 iops : min= 3042, max= 3460, avg=3324.80, stdev=179.49, samples=5 00:43:53.903 lat (usec) : 250=12.69%, 500=87.12%, 750=0.10%, 1000=0.05% 00:43:53.903 lat (msec) : 2=0.02%, 4=0.01% 00:43:53.903 cpu : usr=1.17%, sys=5.80%, ctx=9916, majf=0, minf=2 00:43:53.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.903 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.903 issued rwts: total=9915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:53.903 00:43:53.903 Run status group 0 (all jobs): 00:43:53.903 READ: bw=54.8MiB/s (57.5MB/s), 12.9MiB/s-18.9MiB/s (13.5MB/s-19.8MB/s), io=230MiB (242MB), run=2999-4206msec 00:43:53.903 00:43:53.903 Disk stats (read/write): 00:43:53.903 nvme0n1: ios=12400/0, merge=0/0, ticks=3192/0, in_queue=3192, util=95.70% 00:43:53.903 nvme0n2: ios=19794/0, merge=0/0, ticks=3697/0, in_queue=3697, util=96.03% 00:43:53.903 nvme0n3: ios=14587/0, merge=0/0, ticks=2874/0, in_queue=2874, util=96.21% 00:43:53.903 nvme0n4: ios=9537/0, merge=0/0, ticks=2578/0, in_queue=2578, util=96.69% 00:43:53.903 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:53.903 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:54.469 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:54.469 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:54.728 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:54.728 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:43:55.038 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:55.038 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:55.318 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:55.318 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:55.577 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:55.577 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66419 00:43:55.577 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:55.577 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:55.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:55.577 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:55.577 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:43:55.577 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:55.835 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:55.835 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:55.835 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:55.835 nvmf hotplug test: fio failed as expected 00:43:55.835 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:43:55.835 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:55.835 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:55.835 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:56.093 
09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:56.093 rmmod nvme_tcp 00:43:56.093 rmmod nvme_fabrics 00:43:56.093 rmmod nvme_keyring 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66038 ']' 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66038 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66038 ']' 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66038 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66038 00:43:56.093 killing process with pid 66038 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66038' 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66038 00:43:56.093 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66038 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:56.351 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:56.610 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:56.610 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:56.610 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:56.610 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:56.610 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:56.610 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:56.610 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:56.610 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:43:56.610 ************************************ 00:43:56.610 END TEST nvmf_fio_target 00:43:56.610 ************************************ 00:43:56.610 00:43:56.610 real 0m20.692s 00:43:56.610 user 1m17.514s 00:43:56.610 sys 0m11.050s 00:43:56.610 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:56.610 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:56.610 09:56:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:43:56.610 09:56:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:56.610 09:56:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:56.610 09:56:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:43:56.610 ************************************ 00:43:56.610 START TEST nvmf_bdevio 00:43:56.610 ************************************ 00:43:56.610 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:43:56.610 * Looking for test storage... 
00:43:56.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.870 --rc genhtml_branch_coverage=1 00:43:56.870 --rc genhtml_function_coverage=1 00:43:56.870 --rc genhtml_legend=1 00:43:56.870 --rc geninfo_all_blocks=1 00:43:56.870 --rc geninfo_unexecuted_blocks=1 00:43:56.870 00:43:56.870 ' 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.870 --rc genhtml_branch_coverage=1 00:43:56.870 --rc genhtml_function_coverage=1 00:43:56.870 --rc genhtml_legend=1 00:43:56.870 --rc geninfo_all_blocks=1 00:43:56.870 --rc geninfo_unexecuted_blocks=1 00:43:56.870 00:43:56.870 ' 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.870 --rc genhtml_branch_coverage=1 00:43:56.870 --rc genhtml_function_coverage=1 00:43:56.870 --rc genhtml_legend=1 00:43:56.870 --rc geninfo_all_blocks=1 00:43:56.870 --rc geninfo_unexecuted_blocks=1 00:43:56.870 00:43:56.870 ' 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.870 --rc genhtml_branch_coverage=1 00:43:56.870 --rc genhtml_function_coverage=1 00:43:56.870 --rc genhtml_legend=1 00:43:56.870 --rc geninfo_all_blocks=1 00:43:56.870 --rc geninfo_unexecuted_blocks=1 00:43:56.870 00:43:56.870 ' 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.870 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:56.871 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:56.871 Cannot find device "nvmf_init_br" 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:56.871 Cannot find device "nvmf_init_br2" 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:56.871 Cannot find device "nvmf_tgt_br" 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:56.871 Cannot find device "nvmf_tgt_br2" 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:56.871 Cannot find device "nvmf_init_br" 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:56.871 Cannot find device "nvmf_init_br2" 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:56.871 Cannot find device "nvmf_tgt_br" 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:56.871 Cannot find device "nvmf_tgt_br2" 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:56.871 Cannot find device "nvmf_br" 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:56.871 Cannot find device "nvmf_init_if" 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:43:56.871 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:57.129 Cannot find device "nvmf_init_if2" 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:57.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:57.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:57.129 
09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:57.129 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:57.388 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:57.388 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:43:57.388 00:43:57.388 --- 10.0.0.3 ping statistics --- 00:43:57.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:57.388 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:57.388 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:57.388 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:43:57.388 00:43:57.388 --- 10.0.0.4 ping statistics --- 00:43:57.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:57.388 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:57.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:57.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:43:57.388 00:43:57.388 --- 10.0.0.1 ping statistics --- 00:43:57.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:57.388 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:57.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:57.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:43:57.388 00:43:57.388 --- 10.0.0.2 ping statistics --- 00:43:57.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:57.388 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66797 00:43:57.388 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:43:57.389 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66797 00:43:57.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:57.389 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66797 ']' 00:43:57.389 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:57.389 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:57.389 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:57.389 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:57.389 09:56:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.389 [2024-12-09 09:56:04.763188] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:43:57.389 [2024-12-09 09:56:04.763882] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:57.648 [2024-12-09 09:56:04.913751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:57.648 [2024-12-09 09:56:04.948528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:57.648 [2024-12-09 09:56:04.949002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:57.648 [2024-12-09 09:56:04.949025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:57.648 [2024-12-09 09:56:04.949035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:57.648 [2024-12-09 09:56:04.949042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:57.648 [2024-12-09 09:56:04.949861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:43:57.648 [2024-12-09 09:56:04.950142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:43:57.648 [2024-12-09 09:56:04.950214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:43:57.648 [2024-12-09 09:56:04.950218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:57.648 [2024-12-09 09:56:04.983112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.648 [2024-12-09 09:56:05.073906] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.648 Malloc0 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.648 [2024-12-09 09:56:05.133071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:57.648 { 00:43:57.648 "params": { 00:43:57.648 "name": "Nvme$subsystem", 00:43:57.648 "trtype": "$TEST_TRANSPORT", 00:43:57.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:57.648 "adrfam": "ipv4", 00:43:57.648 "trsvcid": "$NVMF_PORT", 00:43:57.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:57.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:57.648 "hdgst": ${hdgst:-false}, 00:43:57.648 "ddgst": ${ddgst:-false} 00:43:57.648 }, 00:43:57.648 "method": "bdev_nvme_attach_controller" 00:43:57.648 } 00:43:57.648 EOF 00:43:57.648 )") 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
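The heredoc traced just above expands into the bdev_nvme_attach_controller parameters that bdevio reads from fd 62; the resolved JSON is printed immediately below. As a minimal sketch only (not part of the captured run), assuming the nvmf_tgt started earlier is still listening on the default /var/tmp/spdk.sock and using the stock scripts/rpc.py client in place of the test suite's rpc_cmd wrapper, the same target-side setup and bdevio invocation could be reproduced by hand roughly as:

  # Hedged sketch; the RPC names, paths, and addresses are the ones visible in this log.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # bdevio then attaches over TCP using a JSON config equivalent to the fragment printed below
  # (here assumed to have been saved to a hypothetical file, attach.json):
  test/bdev/bdevio/bdevio --json attach.json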
00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:43:57.648 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:57.648 "params": { 00:43:57.648 "name": "Nvme1", 00:43:57.648 "trtype": "tcp", 00:43:57.648 "traddr": "10.0.0.3", 00:43:57.648 "adrfam": "ipv4", 00:43:57.648 "trsvcid": "4420", 00:43:57.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:57.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:57.648 "hdgst": false, 00:43:57.648 "ddgst": false 00:43:57.648 }, 00:43:57.648 "method": "bdev_nvme_attach_controller" 00:43:57.648 }' 00:43:57.906 [2024-12-09 09:56:05.181393] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:43:57.906 [2024-12-09 09:56:05.182107] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66827 ] 00:43:57.906 [2024-12-09 09:56:05.335236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:57.906 [2024-12-09 09:56:05.378928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:57.906 [2024-12-09 09:56:05.379019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:57.906 [2024-12-09 09:56:05.379023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:58.164 [2024-12-09 09:56:05.423090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:58.164 I/O targets: 00:43:58.164 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:58.164 00:43:58.164 00:43:58.164 CUnit - A unit testing framework for C - Version 2.1-3 00:43:58.164 http://cunit.sourceforge.net/ 00:43:58.164 00:43:58.164 00:43:58.164 Suite: bdevio tests on: Nvme1n1 00:43:58.164 Test: blockdev write read block ...passed 00:43:58.164 Test: blockdev write zeroes read block ...passed 00:43:58.164 Test: blockdev write zeroes read no split ...passed 00:43:58.164 Test: blockdev write zeroes read split ...passed 00:43:58.164 Test: blockdev write zeroes read split partial ...passed 00:43:58.164 Test: blockdev reset ...[2024-12-09 09:56:05.567612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:58.164 [2024-12-09 09:56:05.568147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x509b80 (9): Bad file descriptor 00:43:58.164 passed 00:43:58.164 Test: blockdev write read 8 blocks ...[2024-12-09 09:56:05.579388] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:43:58.164 passed 00:43:58.164 Test: blockdev write read size > 128k ...passed 00:43:58.164 Test: blockdev write read invalid size ...passed 00:43:58.164 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:58.164 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:58.164 Test: blockdev write read max offset ...passed 00:43:58.164 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:58.164 Test: blockdev writev readv 8 blocks ...passed 00:43:58.164 Test: blockdev writev readv 30 x 1block ...passed 00:43:58.164 Test: blockdev writev readv block ...passed 00:43:58.164 Test: blockdev writev readv size > 128k ...passed 00:43:58.164 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:58.164 Test: blockdev comparev and writev ...[2024-12-09 09:56:05.588129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.164 [2024-12-09 09:56:05.588184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:58.164 [2024-12-09 09:56:05.588211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.164 [2024-12-09 09:56:05.588225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:58.165 [2024-12-09 09:56:05.588538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.165 [2024-12-09 09:56:05.588560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:58.165 [2024-12-09 09:56:05.588580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.165 [2024-12-09 09:56:05.588592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:58.165 [2024-12-09 09:56:05.588952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.165 [2024-12-09 09:56:05.589001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:58.165 [2024-12-09 09:56:05.589023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.165 [2024-12-09 09:56:05.589036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:58.165 [2024-12-09 09:56:05.589330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.165 [2024-12-09 09:56:05.589366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:58.165 [2024-12-09 09:56:05.589388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.165 [2024-12-09 09:56:05.589401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:43:58.165 passed 00:43:58.165 Test: blockdev nvme passthru rw ...passed 00:43:58.165 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:56:05.590501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:58.165 [2024-12-09 09:56:05.590532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:58.165 [2024-12-09 09:56:05.590648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:58.165 [2024-12-09 09:56:05.590673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:58.165 [2024-12-09 09:56:05.590783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOpassed 00:43:58.165 Test: blockdev nvme admin passthru ...CK OFFSET 0x0 len:0x0 00:43:58.165 [2024-12-09 09:56:05.590874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:58.165 [2024-12-09 09:56:05.591009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:58.165 [2024-12-09 09:56:05.591036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:58.165 passed 00:43:58.165 Test: blockdev copy ...passed 00:43:58.165 00:43:58.165 Run Summary: Type Total Ran Passed Failed Inactive 00:43:58.165 suites 1 1 n/a 0 0 00:43:58.165 tests 23 23 23 0 0 00:43:58.165 asserts 152 152 152 0 n/a 00:43:58.165 00:43:58.165 Elapsed time = 0.156 seconds 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:58.422 rmmod nvme_tcp 00:43:58.422 rmmod nvme_fabrics 00:43:58.422 rmmod nvme_keyring 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66797 ']' 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66797 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66797 ']' 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66797 00:43:58.422 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:43:58.680 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:58.680 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66797 00:43:58.680 killing process with pid 66797 00:43:58.680 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:43:58.680 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:43:58.680 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66797' 00:43:58.680 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66797 00:43:58.680 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66797 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:58.938 09:56:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:43:58.938 00:43:58.938 real 0m2.392s 00:43:58.938 user 0m5.884s 00:43:58.938 sys 0m0.784s 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:58.938 09:56:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:58.938 ************************************ 00:43:58.938 END TEST nvmf_bdevio 00:43:58.938 ************************************ 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:59.196 ************************************ 00:43:59.196 END TEST nvmf_target_core 00:43:59.196 ************************************ 00:43:59.196 00:43:59.196 real 2m35.074s 00:43:59.196 user 6m48.072s 00:43:59.196 sys 0m53.109s 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:43:59.196 09:56:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:43:59.196 09:56:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:59.196 09:56:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:59.196 09:56:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.196 ************************************ 00:43:59.196 START TEST nvmf_target_extra 00:43:59.196 ************************************ 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:43:59.196 * Looking for test storage... 
00:43:59.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:59.196 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:59.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.197 --rc genhtml_branch_coverage=1 00:43:59.197 --rc genhtml_function_coverage=1 00:43:59.197 --rc genhtml_legend=1 00:43:59.197 --rc geninfo_all_blocks=1 00:43:59.197 --rc geninfo_unexecuted_blocks=1 00:43:59.197 00:43:59.197 ' 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:59.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.197 --rc genhtml_branch_coverage=1 00:43:59.197 --rc genhtml_function_coverage=1 00:43:59.197 --rc genhtml_legend=1 00:43:59.197 --rc geninfo_all_blocks=1 00:43:59.197 --rc geninfo_unexecuted_blocks=1 00:43:59.197 00:43:59.197 ' 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:59.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.197 --rc genhtml_branch_coverage=1 00:43:59.197 --rc genhtml_function_coverage=1 00:43:59.197 --rc genhtml_legend=1 00:43:59.197 --rc geninfo_all_blocks=1 00:43:59.197 --rc geninfo_unexecuted_blocks=1 00:43:59.197 00:43:59.197 ' 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:59.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.197 --rc genhtml_branch_coverage=1 00:43:59.197 --rc genhtml_function_coverage=1 00:43:59.197 --rc genhtml_legend=1 00:43:59.197 --rc geninfo_all_blocks=1 00:43:59.197 --rc geninfo_unexecuted_blocks=1 00:43:59.197 00:43:59.197 ' 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:59.197 09:56:06 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:59.197 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:59.456 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:43:59.456 ************************************ 00:43:59.456 START TEST nvmf_auth_target 00:43:59.456 ************************************ 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:43:59.456 * Looking for test storage... 
00:43:59.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.456 --rc genhtml_branch_coverage=1 00:43:59.456 --rc genhtml_function_coverage=1 00:43:59.456 --rc genhtml_legend=1 00:43:59.456 --rc geninfo_all_blocks=1 00:43:59.456 --rc geninfo_unexecuted_blocks=1 00:43:59.456 00:43:59.456 ' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.456 --rc genhtml_branch_coverage=1 00:43:59.456 --rc genhtml_function_coverage=1 00:43:59.456 --rc genhtml_legend=1 00:43:59.456 --rc geninfo_all_blocks=1 00:43:59.456 --rc geninfo_unexecuted_blocks=1 00:43:59.456 00:43:59.456 ' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.456 --rc genhtml_branch_coverage=1 00:43:59.456 --rc genhtml_function_coverage=1 00:43:59.456 --rc genhtml_legend=1 00:43:59.456 --rc geninfo_all_blocks=1 00:43:59.456 --rc geninfo_unexecuted_blocks=1 00:43:59.456 00:43:59.456 ' 00:43:59.456 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.456 --rc genhtml_branch_coverage=1 00:43:59.456 --rc genhtml_function_coverage=1 00:43:59.456 --rc genhtml_legend=1 00:43:59.456 --rc geninfo_all_blocks=1 00:43:59.456 --rc geninfo_unexecuted_blocks=1 00:43:59.456 00:43:59.456 ' 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:59.457 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:59.457 
09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:59.457 Cannot find device "nvmf_init_br" 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:43:59.457 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:59.457 Cannot find device "nvmf_init_br2" 00:43:59.715 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:43:59.715 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:59.715 Cannot find device "nvmf_tgt_br" 00:43:59.715 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:43:59.715 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:59.715 Cannot find device "nvmf_tgt_br2" 00:43:59.715 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:43:59.715 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:59.715 Cannot find device "nvmf_init_br" 00:43:59.715 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:43:59.715 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:59.715 Cannot find device "nvmf_init_br2" 00:43:59.715 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:43:59.715 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:59.715 Cannot find device "nvmf_tgt_br" 00:43:59.715 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:43:59.715 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:59.715 Cannot find device "nvmf_tgt_br2" 00:43:59.715 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:43:59.715 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:59.715 Cannot find device "nvmf_br" 00:43:59.715 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:43:59.715 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:59.715 Cannot find device "nvmf_init_if" 00:43:59.715 09:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:43:59.715 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:59.715 Cannot find device "nvmf_init_if2" 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:59.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:59.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:59.716 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:59.716 09:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:59.973 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:59.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:59.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:43:59.974 00:43:59.974 --- 10.0.0.3 ping statistics --- 00:43:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:59.974 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:59.974 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:59.974 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:43:59.974 00:43:59.974 --- 10.0.0.4 ping statistics --- 00:43:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:59.974 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:59.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:59.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:43:59.974 00:43:59.974 --- 10.0.0.1 ping statistics --- 00:43:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:59.974 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:59.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:59.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:43:59.974 00:43:59.974 --- 10.0.0.2 ping statistics --- 00:43:59.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:59.974 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67109 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67109 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67109 ']' 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
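For reference, the network plumbing that nvmf_veth_init performs above condenses to the sketch below. It is reassembled from the ip/iptables commands recorded in this log (the interface, namespace, and address names are exactly the ones logged); it is not the nvmf/common.sh helper itself and it omits the tolerant pre-cleanup steps and the iptables comment tagging.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator ends stay in the root namespace
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target ends move into the namespace
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # NVMF_FIRST_INITIATOR_IP
ip addr add 10.0.0.2/24 dev nvmf_init_if2                      # NVMF_SECOND_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # one bridge joins all four peer ends
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP back to the initiator ifaces
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                              # root namespace reaches the target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With the bridge up and the pings succeeding in both directions, the target application is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), while the host-side spdk_tgt is launched separately and listens on /var/tmp/host.sock.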
00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:59.974 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:00.231 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:00.231 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:44:00.231 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:00.231 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:00.231 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67128 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f3e947d8d57410ea0ccda817a0ee718811f58c2dd1bfca86 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Q6x 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f3e947d8d57410ea0ccda817a0ee718811f58c2dd1bfca86 0 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f3e947d8d57410ea0ccda817a0ee718811f58c2dd1bfca86 0 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f3e947d8d57410ea0ccda817a0ee718811f58c2dd1bfca86 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:44:00.490 09:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Q6x 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Q6x 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Q6x 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=551cd4f72fa177f8d3e514163b129f21235d9e825e626af195db518cbf3820f7 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.sXb 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 551cd4f72fa177f8d3e514163b129f21235d9e825e626af195db518cbf3820f7 3 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 551cd4f72fa177f8d3e514163b129f21235d9e825e626af195db518cbf3820f7 3 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=551cd4f72fa177f8d3e514163b129f21235d9e825e626af195db518cbf3820f7 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.sXb 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.sXb 00:44:00.490 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.sXb 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:44:00.491 09:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=534d41d818c3aef3727a0162cd4ab67f 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Q8s 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 534d41d818c3aef3727a0162cd4ab67f 1 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 534d41d818c3aef3727a0162cd4ab67f 1 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=534d41d818c3aef3727a0162cd4ab67f 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Q8s 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Q8s 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Q8s 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0599c14378ae2f2a88238352bda62a8a8d529069a503d025 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gV0 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0599c14378ae2f2a88238352bda62a8a8d529069a503d025 2 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0599c14378ae2f2a88238352bda62a8a8d529069a503d025 2 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0599c14378ae2f2a88238352bda62a8a8d529069a503d025 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:44:00.491 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:44:00.750 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gV0 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gV0 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.gV0 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bab7d1ee13f44e987b0a08d14410f604917b46e15734ce89 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.p06 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bab7d1ee13f44e987b0a08d14410f604917b46e15734ce89 2 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bab7d1ee13f44e987b0a08d14410f604917b46e15734ce89 2 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bab7d1ee13f44e987b0a08d14410f604917b46e15734ce89 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.p06 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.p06 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.p06 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:44:00.750 09:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cdee28e8a54f462c9fa299a919bd4b89 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4Az 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cdee28e8a54f462c9fa299a919bd4b89 1 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cdee28e8a54f462c9fa299a919bd4b89 1 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cdee28e8a54f462c9fa299a919bd4b89 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4Az 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4Az 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.4Az 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4ddd8f6944d6985bfbc0319a6e04620480b1f66125775a6debac294b93c46176 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Nip 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
4ddd8f6944d6985bfbc0319a6e04620480b1f66125775a6debac294b93c46176 3 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4ddd8f6944d6985bfbc0319a6e04620480b1f66125775a6debac294b93c46176 3 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4ddd8f6944d6985bfbc0319a6e04620480b1f66125775a6debac294b93c46176 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Nip 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Nip 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Nip 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:44:00.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67109 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67109 ']' 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:00.750 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:01.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:44:01.009 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:01.009 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:44:01.009 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67128 /var/tmp/host.sock 00:44:01.009 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67128 ']' 00:44:01.009 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:44:01.009 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:01.009 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
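Each gen_dhchap_key call above reduces to: draw len/2 random bytes, hex-encode them, wrap the hex string in a DH-HMAC-CHAP secret, and store it mode 0600 under /tmp. A minimal stand-alone sketch follows, under the assumption that the payload is base64 of the secret bytes plus a 4-byte little-endian CRC-32 trailer (the NVMe DH-HMAC-CHAP secret layout) and using python3 in place of the script's inline python helper; the real formatting lives in format_dhchap_key in nvmf/common.sh.

digest=0    # 0=null, 1=sha256, 2=sha384, 3=sha512 -> the two digits after "DHHC-1:"
len=48      # secret length in hex characters, e.g. "gen_dhchap_key null 48" above
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # e.g. f3e947d8...bfca86
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" <<'PY' > "$file"
import base64, sys, zlib
secret = sys.argv[1].encode()                        # the hex string itself is the secret bytes
crc = zlib.crc32(secret).to_bytes(4, "little")       # assumed 4-byte little-endian CRC-32 trailer
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PY
chmod 0600 "$file"                                   # the path becomes keys[i] / ckeys[i]

The file paths produced this way are what the keyring_file_add_key RPCs below load on both sides: once against the target's default /var/tmp/spdk.sock and once against the host at /var/tmp/host.sock, as key0..key3 and ckey0..ckey2.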
00:44:01.009 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:01.009 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:01.294 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:01.294 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:44:01.294 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:44:01.294 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.294 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:01.552 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.552 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:44:01.552 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Q6x 00:44:01.552 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.552 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:01.552 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.552 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Q6x 00:44:01.552 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Q6x 00:44:01.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.sXb ]] 00:44:01.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sXb 00:44:01.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:01.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sXb 00:44:01.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sXb 00:44:02.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:44:02.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Q8s 00:44:02.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:02.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:02.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Q8s 00:44:02.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Q8s 00:44:02.328 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.gV0 ]] 00:44:02.328 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gV0 00:44:02.328 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:02.328 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:02.328 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.328 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gV0 00:44:02.328 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gV0 00:44:02.586 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:44:02.586 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.p06 00:44:02.586 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:02.586 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:02.586 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.586 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.p06 00:44:02.586 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.p06 00:44:02.844 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.4Az ]] 00:44:02.844 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4Az 00:44:02.844 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:02.844 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:02.844 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.844 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4Az 00:44:02.844 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4Az 00:44:03.101 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:44:03.102 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Nip 00:44:03.102 09:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.102 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:03.102 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.102 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Nip 00:44:03.102 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Nip 00:44:03.667 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:44:03.667 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:44:03.667 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:03.667 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:03.667 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:44:03.667 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:03.925 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:04.184 00:44:04.184 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:04.184 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:04.184 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:04.442 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:04.442 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:04.442 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:04.442 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:04.442 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:04.442 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:04.442 { 00:44:04.442 "cntlid": 1, 00:44:04.442 "qid": 0, 00:44:04.442 "state": "enabled", 00:44:04.442 "thread": "nvmf_tgt_poll_group_000", 00:44:04.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:04.442 "listen_address": { 00:44:04.442 "trtype": "TCP", 00:44:04.442 "adrfam": "IPv4", 00:44:04.442 "traddr": "10.0.0.3", 00:44:04.442 "trsvcid": "4420" 00:44:04.442 }, 00:44:04.442 "peer_address": { 00:44:04.442 "trtype": "TCP", 00:44:04.442 "adrfam": "IPv4", 00:44:04.442 "traddr": "10.0.0.1", 00:44:04.442 "trsvcid": "53810" 00:44:04.442 }, 00:44:04.442 "auth": { 00:44:04.442 "state": "completed", 00:44:04.442 "digest": "sha256", 00:44:04.442 "dhgroup": "null" 00:44:04.442 } 00:44:04.442 } 00:44:04.442 ]' 00:44:04.442 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:04.701 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:04.701 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:04.701 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:44:04.701 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:04.701 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:04.701 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:04.701 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:04.958 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:04.958 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:10.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:10.238 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:10.238 09:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:10.238 00:44:10.496 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:10.496 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:10.496 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:10.754 { 00:44:10.754 "cntlid": 3, 00:44:10.754 "qid": 0, 00:44:10.754 "state": "enabled", 00:44:10.754 "thread": "nvmf_tgt_poll_group_000", 00:44:10.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:10.754 "listen_address": { 00:44:10.754 "trtype": "TCP", 00:44:10.754 "adrfam": "IPv4", 00:44:10.754 "traddr": "10.0.0.3", 00:44:10.754 "trsvcid": "4420" 00:44:10.754 }, 00:44:10.754 "peer_address": { 00:44:10.754 "trtype": "TCP", 00:44:10.754 "adrfam": "IPv4", 00:44:10.754 "traddr": "10.0.0.1", 00:44:10.754 "trsvcid": "53416" 00:44:10.754 }, 00:44:10.754 "auth": { 00:44:10.754 "state": "completed", 00:44:10.754 "digest": "sha256", 00:44:10.754 "dhgroup": "null" 00:44:10.754 } 00:44:10.754 } 00:44:10.754 ]' 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:44:10.754 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:11.013 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:11.013 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:11.013 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:11.271 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret 
DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:11.271 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:12.205 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:12.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:12.205 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:12.205 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.205 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:12.205 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.205 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:12.205 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:44:12.205 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:12.463 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:12.721 00:44:12.721 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:12.721 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:12.721 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:13.287 { 00:44:13.287 "cntlid": 5, 00:44:13.287 "qid": 0, 00:44:13.287 "state": "enabled", 00:44:13.287 "thread": "nvmf_tgt_poll_group_000", 00:44:13.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:13.287 "listen_address": { 00:44:13.287 "trtype": "TCP", 00:44:13.287 "adrfam": "IPv4", 00:44:13.287 "traddr": "10.0.0.3", 00:44:13.287 "trsvcid": "4420" 00:44:13.287 }, 00:44:13.287 "peer_address": { 00:44:13.287 "trtype": "TCP", 00:44:13.287 "adrfam": "IPv4", 00:44:13.287 "traddr": "10.0.0.1", 00:44:13.287 "trsvcid": "53434" 00:44:13.287 }, 00:44:13.287 "auth": { 00:44:13.287 "state": "completed", 00:44:13.287 "digest": "sha256", 00:44:13.287 "dhgroup": "null" 00:44:13.287 } 00:44:13.287 } 00:44:13.287 ]' 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:13.287 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:13.545 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:44:13.545 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:44:14.479 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:14.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:14.479 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:14.479 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.479 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:14.479 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.479 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:14.479 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:44:14.479 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:14.737 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:14.995 00:44:14.995 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:14.995 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:14.995 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:15.562 { 00:44:15.562 "cntlid": 7, 00:44:15.562 "qid": 0, 00:44:15.562 "state": "enabled", 00:44:15.562 "thread": "nvmf_tgt_poll_group_000", 00:44:15.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:15.562 "listen_address": { 00:44:15.562 "trtype": "TCP", 00:44:15.562 "adrfam": "IPv4", 00:44:15.562 "traddr": "10.0.0.3", 00:44:15.562 "trsvcid": "4420" 00:44:15.562 }, 00:44:15.562 "peer_address": { 00:44:15.562 "trtype": "TCP", 00:44:15.562 "adrfam": "IPv4", 00:44:15.562 "traddr": "10.0.0.1", 00:44:15.562 "trsvcid": "53456" 00:44:15.562 }, 00:44:15.562 "auth": { 00:44:15.562 "state": "completed", 00:44:15.562 "digest": "sha256", 00:44:15.562 "dhgroup": "null" 00:44:15.562 } 00:44:15.562 } 00:44:15.562 ]' 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:15.562 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:15.820 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:44:15.820 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:44:16.754 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:16.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:16.754 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:16.754 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.754 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:16.754 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.754 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:16.754 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:16.754 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:44:16.754 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:44:17.027 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:44:17.027 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:17.027 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:17.027 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:17.027 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:17.027 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:17.027 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:17.028 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.028 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:17.028 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.028 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:17.028 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:17.028 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:17.317 00:44:17.317 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:17.317 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:17.317 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:17.882 { 00:44:17.882 "cntlid": 9, 00:44:17.882 "qid": 0, 00:44:17.882 "state": "enabled", 00:44:17.882 "thread": "nvmf_tgt_poll_group_000", 00:44:17.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:17.882 "listen_address": { 00:44:17.882 "trtype": "TCP", 00:44:17.882 "adrfam": "IPv4", 00:44:17.882 "traddr": "10.0.0.3", 00:44:17.882 "trsvcid": "4420" 00:44:17.882 }, 00:44:17.882 "peer_address": { 00:44:17.882 "trtype": "TCP", 00:44:17.882 "adrfam": "IPv4", 00:44:17.882 "traddr": "10.0.0.1", 00:44:17.882 "trsvcid": "53486" 00:44:17.882 }, 00:44:17.882 "auth": { 00:44:17.882 "state": "completed", 00:44:17.882 "digest": "sha256", 00:44:17.882 "dhgroup": "ffdhe2048" 00:44:17.882 } 00:44:17.882 } 00:44:17.882 ]' 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:17.882 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:17.883 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:17.883 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:17.883 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:18.141 
09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:18.141 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:19.074 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:19.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:19.074 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:19.074 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:19.074 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:19.074 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:19.074 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:19.074 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:44:19.074 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:19.333 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:19.590 00:44:19.848 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:19.848 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:19.848 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:20.106 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:20.106 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:20.106 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.106 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:20.106 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.106 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:20.106 { 00:44:20.106 "cntlid": 11, 00:44:20.106 "qid": 0, 00:44:20.106 "state": "enabled", 00:44:20.106 "thread": "nvmf_tgt_poll_group_000", 00:44:20.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:20.106 "listen_address": { 00:44:20.106 "trtype": "TCP", 00:44:20.106 "adrfam": "IPv4", 00:44:20.106 "traddr": "10.0.0.3", 00:44:20.106 "trsvcid": "4420" 00:44:20.106 }, 00:44:20.106 "peer_address": { 00:44:20.106 "trtype": "TCP", 00:44:20.106 "adrfam": "IPv4", 00:44:20.106 "traddr": "10.0.0.1", 00:44:20.106 "trsvcid": "43548" 00:44:20.106 }, 00:44:20.106 "auth": { 00:44:20.106 "state": "completed", 00:44:20.106 "digest": "sha256", 00:44:20.106 "dhgroup": "ffdhe2048" 00:44:20.106 } 00:44:20.106 } 00:44:20.106 ]' 00:44:20.106 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:20.107 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:20.107 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:20.107 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:20.107 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:20.365 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:20.365 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:20.365 
09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:20.622 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:20.623 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:21.558 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:21.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:21.558 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:21.558 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.558 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:21.558 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.558 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:21.558 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:44:21.558 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:21.817 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:22.075 00:44:22.075 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:22.075 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:22.075 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:22.333 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:22.333 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:22.333 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.333 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:22.333 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.333 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:22.333 { 00:44:22.333 "cntlid": 13, 00:44:22.333 "qid": 0, 00:44:22.333 "state": "enabled", 00:44:22.333 "thread": "nvmf_tgt_poll_group_000", 00:44:22.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:22.333 "listen_address": { 00:44:22.333 "trtype": "TCP", 00:44:22.333 "adrfam": "IPv4", 00:44:22.333 "traddr": "10.0.0.3", 00:44:22.333 "trsvcid": "4420" 00:44:22.333 }, 00:44:22.333 "peer_address": { 00:44:22.333 "trtype": "TCP", 00:44:22.333 "adrfam": "IPv4", 00:44:22.333 "traddr": "10.0.0.1", 00:44:22.333 "trsvcid": "43576" 00:44:22.333 }, 00:44:22.333 "auth": { 00:44:22.333 "state": "completed", 00:44:22.333 "digest": "sha256", 00:44:22.333 "dhgroup": "ffdhe2048" 00:44:22.333 } 00:44:22.333 } 00:44:22.333 ]' 00:44:22.333 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:22.333 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:22.333 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:22.591 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:22.591 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:22.591 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:22.591 09:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:22.592 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:22.850 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:44:22.850 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:44:23.785 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:23.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:23.785 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:23.785 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.785 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:23.785 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.785 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:23.785 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:44:23.785 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
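[editor's note] For anyone skimming the trace, each pass of the loop above exercises one digest/DH-group/key combination end to end. The following is a condensed sketch of the host- and target-side commands involved, reconstructed from the trace itself; rpc.py abbreviates the /home/vagrant/spdk_repo/spdk/scripts/rpc.py path shown in the log, and the <hostnqn>, <uuid>, keyN/ckeyN and DHHC-1:... placeholders stand for the UUID-based host NQN, key index and secrets that change per iteration (the sketch compresses the jq checks into single pipelines, so it is illustrative rather than a verbatim replay):

    # restrict the host to the digest/dhgroup pair under test (host RPC socket)
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # register the host on the subsystem with the matching key (plus a ctrlr key when one exists)
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key keyN --dhchap-ctrlr-key ckeyN

    # attach a controller through the host RPC socket, authenticating with the same key pair
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key keyN --dhchap-ctrlr-key ckeyN

    # confirm the controller exists, then check the target-side qpair reports the expected auth result
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"

    # tear down, then repeat the connect/disconnect with nvme-cli using the raw DHHC-1 secrets
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -q <hostnqn> --hostid <uuid> -l 0 \
        --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>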
00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:24.043 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:24.301 00:44:24.301 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:24.301 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:24.301 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:24.559 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:24.559 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:24.559 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.559 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:24.559 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.559 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:24.559 { 00:44:24.559 "cntlid": 15, 00:44:24.559 "qid": 0, 00:44:24.559 "state": "enabled", 00:44:24.559 "thread": "nvmf_tgt_poll_group_000", 00:44:24.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:24.559 "listen_address": { 00:44:24.559 "trtype": "TCP", 00:44:24.559 "adrfam": "IPv4", 00:44:24.559 "traddr": "10.0.0.3", 00:44:24.559 "trsvcid": "4420" 00:44:24.559 }, 00:44:24.559 "peer_address": { 00:44:24.559 "trtype": "TCP", 00:44:24.560 "adrfam": "IPv4", 00:44:24.560 "traddr": "10.0.0.1", 00:44:24.560 "trsvcid": "43602" 00:44:24.560 }, 00:44:24.560 "auth": { 00:44:24.560 "state": "completed", 00:44:24.560 "digest": "sha256", 00:44:24.560 "dhgroup": "ffdhe2048" 00:44:24.560 } 00:44:24.560 } 00:44:24.560 ]' 00:44:24.560 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:24.817 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:24.817 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:24.817 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:24.817 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:24.817 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:24.817 
09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:24.817 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:25.075 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:44:25.075 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:44:26.008 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:26.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:26.008 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:26.008 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.008 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:26.008 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.008 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:26.008 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:26.008 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:44:26.008 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:26.266 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:26.832 00:44:26.832 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:26.832 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:26.832 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:27.091 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:27.091 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:27.091 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.091 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:27.091 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.091 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:27.091 { 00:44:27.091 "cntlid": 17, 00:44:27.091 "qid": 0, 00:44:27.091 "state": "enabled", 00:44:27.091 "thread": "nvmf_tgt_poll_group_000", 00:44:27.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:27.091 "listen_address": { 00:44:27.091 "trtype": "TCP", 00:44:27.091 "adrfam": "IPv4", 00:44:27.091 "traddr": "10.0.0.3", 00:44:27.091 "trsvcid": "4420" 00:44:27.091 }, 00:44:27.091 "peer_address": { 00:44:27.091 "trtype": "TCP", 00:44:27.091 "adrfam": "IPv4", 00:44:27.091 "traddr": "10.0.0.1", 00:44:27.091 "trsvcid": "43634" 00:44:27.091 }, 00:44:27.091 "auth": { 00:44:27.091 "state": "completed", 00:44:27.091 "digest": "sha256", 00:44:27.091 "dhgroup": "ffdhe3072" 00:44:27.091 } 00:44:27.091 } 00:44:27.091 ]' 00:44:27.091 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:27.091 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:27.091 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:27.349 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:27.349 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:27.349 09:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:27.349 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:27.349 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:27.607 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:27.607 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:28.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:28.565 09:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:29.132 00:44:29.132 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:29.132 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:29.132 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:29.390 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:29.390 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:29.390 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.390 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:29.390 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.390 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:29.390 { 00:44:29.390 "cntlid": 19, 00:44:29.390 "qid": 0, 00:44:29.390 "state": "enabled", 00:44:29.390 "thread": "nvmf_tgt_poll_group_000", 00:44:29.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:29.390 "listen_address": { 00:44:29.390 "trtype": "TCP", 00:44:29.390 "adrfam": "IPv4", 00:44:29.390 "traddr": "10.0.0.3", 00:44:29.390 "trsvcid": "4420" 00:44:29.390 }, 00:44:29.390 "peer_address": { 00:44:29.390 "trtype": "TCP", 00:44:29.390 "adrfam": "IPv4", 00:44:29.390 "traddr": "10.0.0.1", 00:44:29.390 "trsvcid": "43664" 00:44:29.390 }, 00:44:29.390 "auth": { 00:44:29.390 "state": "completed", 00:44:29.390 "digest": "sha256", 00:44:29.390 "dhgroup": "ffdhe3072" 00:44:29.390 } 00:44:29.390 } 00:44:29.390 ]' 00:44:29.390 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:29.390 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:29.390 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:29.390 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:29.391 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:29.391 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:29.391 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:29.391 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:29.955 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:29.955 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:30.520 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:30.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:30.520 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:30.520 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:30.520 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:30.778 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:30.778 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:30.778 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:44:30.778 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:31.036 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:31.294 00:44:31.294 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:31.294 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:31.294 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:31.880 { 00:44:31.880 "cntlid": 21, 00:44:31.880 "qid": 0, 00:44:31.880 "state": "enabled", 00:44:31.880 "thread": "nvmf_tgt_poll_group_000", 00:44:31.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:31.880 "listen_address": { 00:44:31.880 "trtype": "TCP", 00:44:31.880 "adrfam": "IPv4", 00:44:31.880 "traddr": "10.0.0.3", 00:44:31.880 "trsvcid": "4420" 00:44:31.880 }, 00:44:31.880 "peer_address": { 00:44:31.880 "trtype": "TCP", 00:44:31.880 "adrfam": "IPv4", 00:44:31.880 "traddr": "10.0.0.1", 00:44:31.880 "trsvcid": "59724" 00:44:31.880 }, 00:44:31.880 "auth": { 00:44:31.880 "state": "completed", 00:44:31.880 "digest": "sha256", 00:44:31.880 "dhgroup": "ffdhe3072" 00:44:31.880 } 00:44:31.880 } 00:44:31.880 ]' 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:31.880 09:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:31.880 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:32.447 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:44:32.447 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:44:33.013 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:33.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:33.013 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:33.013 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.013 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:33.013 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.013 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:33.013 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:44:33.013 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:33.272 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:33.840 00:44:33.840 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:33.840 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:33.840 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:34.099 { 00:44:34.099 "cntlid": 23, 00:44:34.099 "qid": 0, 00:44:34.099 "state": "enabled", 00:44:34.099 "thread": "nvmf_tgt_poll_group_000", 00:44:34.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:34.099 "listen_address": { 00:44:34.099 "trtype": "TCP", 00:44:34.099 "adrfam": "IPv4", 00:44:34.099 "traddr": "10.0.0.3", 00:44:34.099 "trsvcid": "4420" 00:44:34.099 }, 00:44:34.099 "peer_address": { 00:44:34.099 "trtype": "TCP", 00:44:34.099 "adrfam": "IPv4", 00:44:34.099 "traddr": "10.0.0.1", 00:44:34.099 "trsvcid": "59748" 00:44:34.099 }, 00:44:34.099 "auth": { 00:44:34.099 "state": "completed", 00:44:34.099 "digest": "sha256", 00:44:34.099 "dhgroup": "ffdhe3072" 00:44:34.099 } 00:44:34.099 } 00:44:34.099 ]' 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:34.099 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:34.358 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:34.358 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:34.358 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:34.616 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:44:34.616 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:44:35.551 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:35.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:35.551 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:35.551 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.551 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:35.551 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.551 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:35.551 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:35.551 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:44:35.551 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:35.809 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:36.373 00:44:36.373 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:36.373 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:36.373 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:36.631 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:36.631 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:36.631 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.631 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:36.631 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.631 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:36.631 { 00:44:36.631 "cntlid": 25, 00:44:36.631 "qid": 0, 00:44:36.631 "state": "enabled", 00:44:36.631 "thread": "nvmf_tgt_poll_group_000", 00:44:36.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:36.631 "listen_address": { 00:44:36.631 "trtype": "TCP", 00:44:36.631 "adrfam": "IPv4", 00:44:36.631 "traddr": "10.0.0.3", 00:44:36.631 "trsvcid": "4420" 00:44:36.631 }, 00:44:36.631 "peer_address": { 00:44:36.631 "trtype": "TCP", 00:44:36.631 "adrfam": "IPv4", 00:44:36.631 "traddr": "10.0.0.1", 00:44:36.631 "trsvcid": "59784" 00:44:36.631 }, 00:44:36.631 "auth": { 00:44:36.631 "state": "completed", 00:44:36.631 "digest": "sha256", 00:44:36.631 "dhgroup": "ffdhe4096" 00:44:36.631 } 00:44:36.631 } 00:44:36.631 ]' 00:44:36.631 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:44:36.631 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:36.631 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:36.631 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:36.631 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:36.889 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:36.889 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:36.889 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:37.146 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:37.146 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:38.078 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:38.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:38.078 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:38.078 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.078 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:38.078 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.078 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:38.078 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:44:38.078 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:38.336 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:38.901 00:44:38.901 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:38.901 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:38.901 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:39.158 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:39.158 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:39.158 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:39.158 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:39.158 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:39.158 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:39.158 { 00:44:39.158 "cntlid": 27, 00:44:39.158 "qid": 0, 00:44:39.158 "state": "enabled", 00:44:39.158 "thread": "nvmf_tgt_poll_group_000", 00:44:39.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:39.158 "listen_address": { 00:44:39.158 "trtype": "TCP", 00:44:39.158 "adrfam": "IPv4", 00:44:39.158 "traddr": "10.0.0.3", 00:44:39.158 "trsvcid": "4420" 00:44:39.158 }, 00:44:39.158 "peer_address": { 00:44:39.158 "trtype": "TCP", 00:44:39.158 "adrfam": "IPv4", 00:44:39.158 "traddr": "10.0.0.1", 00:44:39.158 "trsvcid": "59814" 00:44:39.158 }, 00:44:39.158 "auth": { 00:44:39.158 "state": "completed", 
00:44:39.158 "digest": "sha256", 00:44:39.158 "dhgroup": "ffdhe4096" 00:44:39.158 } 00:44:39.158 } 00:44:39.158 ]' 00:44:39.158 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:39.158 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:39.159 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:39.416 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:39.416 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:39.416 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:39.416 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:39.416 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:39.981 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:39.981 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:40.548 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:40.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:40.548 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:40.548 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.548 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:40.548 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.548 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:40.548 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:44:40.548 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:40.806 09:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:40.806 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:41.374 00:44:41.374 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:41.374 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:41.374 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:41.633 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:41.633 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:41.633 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.633 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:41.633 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.633 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:41.633 { 00:44:41.633 "cntlid": 29, 00:44:41.633 "qid": 0, 00:44:41.633 "state": "enabled", 00:44:41.633 "thread": "nvmf_tgt_poll_group_000", 00:44:41.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:41.633 "listen_address": { 00:44:41.633 "trtype": "TCP", 00:44:41.633 "adrfam": "IPv4", 00:44:41.633 "traddr": "10.0.0.3", 00:44:41.633 "trsvcid": "4420" 00:44:41.633 }, 00:44:41.633 "peer_address": { 00:44:41.633 "trtype": "TCP", 00:44:41.633 "adrfam": 
"IPv4", 00:44:41.633 "traddr": "10.0.0.1", 00:44:41.633 "trsvcid": "37376" 00:44:41.633 }, 00:44:41.633 "auth": { 00:44:41.633 "state": "completed", 00:44:41.633 "digest": "sha256", 00:44:41.633 "dhgroup": "ffdhe4096" 00:44:41.633 } 00:44:41.633 } 00:44:41.633 ]' 00:44:41.633 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:41.633 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:41.633 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:41.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:41.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:41.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:41.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:41.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:41.892 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:44:41.892 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:42.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:44:42.828 09:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:42.828 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:43.396 00:44:43.396 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:43.396 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:43.396 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:43.655 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:43.655 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:43.655 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:43.655 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:43.655 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:43.655 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:43.655 { 00:44:43.655 "cntlid": 31, 00:44:43.655 "qid": 0, 00:44:43.655 "state": "enabled", 00:44:43.655 "thread": "nvmf_tgt_poll_group_000", 00:44:43.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:43.655 "listen_address": { 00:44:43.655 "trtype": "TCP", 00:44:43.655 "adrfam": "IPv4", 00:44:43.655 "traddr": "10.0.0.3", 00:44:43.655 "trsvcid": "4420" 00:44:43.655 }, 00:44:43.655 "peer_address": { 00:44:43.655 "trtype": "TCP", 
00:44:43.655 "adrfam": "IPv4", 00:44:43.655 "traddr": "10.0.0.1", 00:44:43.655 "trsvcid": "37406" 00:44:43.655 }, 00:44:43.655 "auth": { 00:44:43.655 "state": "completed", 00:44:43.655 "digest": "sha256", 00:44:43.655 "dhgroup": "ffdhe4096" 00:44:43.655 } 00:44:43.655 } 00:44:43.655 ]' 00:44:43.655 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:43.655 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:43.655 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:43.655 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:43.655 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:43.655 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:43.655 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:43.655 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:44.222 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:44:44.222 09:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:44:44.788 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:44.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:44.788 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:44.788 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.788 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:44.788 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.788 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:44.788 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:44.788 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:44.788 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:44:45.048 
09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:45.048 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:45.616 00:44:45.616 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:45.616 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:45.616 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:45.875 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:45.875 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:45.875 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.875 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:45.875 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.875 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:45.875 { 00:44:45.875 "cntlid": 33, 00:44:45.875 "qid": 0, 00:44:45.875 "state": "enabled", 00:44:45.875 "thread": "nvmf_tgt_poll_group_000", 00:44:45.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:45.875 "listen_address": { 00:44:45.875 "trtype": "TCP", 00:44:45.875 "adrfam": "IPv4", 00:44:45.875 "traddr": 
"10.0.0.3", 00:44:45.875 "trsvcid": "4420" 00:44:45.875 }, 00:44:45.875 "peer_address": { 00:44:45.875 "trtype": "TCP", 00:44:45.875 "adrfam": "IPv4", 00:44:45.875 "traddr": "10.0.0.1", 00:44:45.875 "trsvcid": "37442" 00:44:45.875 }, 00:44:45.875 "auth": { 00:44:45.875 "state": "completed", 00:44:45.875 "digest": "sha256", 00:44:45.875 "dhgroup": "ffdhe6144" 00:44:45.875 } 00:44:45.875 } 00:44:45.875 ]' 00:44:45.875 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:45.875 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:45.875 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:46.133 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:46.133 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:46.133 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:46.133 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:46.133 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:46.391 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:46.391 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:47.325 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:47.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:47.325 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:47.325 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.325 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:47.325 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.325 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:47.325 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:47.325 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:47.583 09:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:48.148 00:44:48.148 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:48.148 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:48.148 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:48.406 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:48.406 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:48.406 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.406 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:48.406 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.406 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:48.406 { 00:44:48.406 "cntlid": 35, 00:44:48.406 "qid": 0, 00:44:48.406 "state": "enabled", 00:44:48.407 "thread": "nvmf_tgt_poll_group_000", 
00:44:48.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:48.407 "listen_address": { 00:44:48.407 "trtype": "TCP", 00:44:48.407 "adrfam": "IPv4", 00:44:48.407 "traddr": "10.0.0.3", 00:44:48.407 "trsvcid": "4420" 00:44:48.407 }, 00:44:48.407 "peer_address": { 00:44:48.407 "trtype": "TCP", 00:44:48.407 "adrfam": "IPv4", 00:44:48.407 "traddr": "10.0.0.1", 00:44:48.407 "trsvcid": "37456" 00:44:48.407 }, 00:44:48.407 "auth": { 00:44:48.407 "state": "completed", 00:44:48.407 "digest": "sha256", 00:44:48.407 "dhgroup": "ffdhe6144" 00:44:48.407 } 00:44:48.407 } 00:44:48.407 ]' 00:44:48.407 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:48.407 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:48.407 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:48.407 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:48.407 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:48.407 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:48.407 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:48.407 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:48.665 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:48.665 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:49.599 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:49.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:49.599 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:49.599 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.599 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:49.599 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.599 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:49.599 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:49.599 09:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:50.164 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:44:50.164 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:50.164 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:50.164 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:50.164 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:50.164 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:50.164 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:50.164 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.164 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.164 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.165 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:50.165 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:50.165 09:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:50.730 00:44:50.730 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:50.730 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:50.730 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:50.988 { 
00:44:50.988 "cntlid": 37, 00:44:50.988 "qid": 0, 00:44:50.988 "state": "enabled", 00:44:50.988 "thread": "nvmf_tgt_poll_group_000", 00:44:50.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:50.988 "listen_address": { 00:44:50.988 "trtype": "TCP", 00:44:50.988 "adrfam": "IPv4", 00:44:50.988 "traddr": "10.0.0.3", 00:44:50.988 "trsvcid": "4420" 00:44:50.988 }, 00:44:50.988 "peer_address": { 00:44:50.988 "trtype": "TCP", 00:44:50.988 "adrfam": "IPv4", 00:44:50.988 "traddr": "10.0.0.1", 00:44:50.988 "trsvcid": "49934" 00:44:50.988 }, 00:44:50.988 "auth": { 00:44:50.988 "state": "completed", 00:44:50.988 "digest": "sha256", 00:44:50.988 "dhgroup": "ffdhe6144" 00:44:50.988 } 00:44:50.988 } 00:44:50.988 ]' 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:50.988 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:51.246 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:44:51.246 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:44:52.268 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:52.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:52.268 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:52.268 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:52.268 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:52.268 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:52.268 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:52.268 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:52.268 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:52.526 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:44:52.526 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:52.526 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:52.527 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:52.527 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:52.527 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:52.527 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:44:52.527 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:52.527 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:52.527 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:52.527 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:52.527 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:52.527 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:53.093 00:44:53.093 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:53.093 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:53.093 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:44:53.351 { 00:44:53.351 "cntlid": 39, 00:44:53.351 "qid": 0, 00:44:53.351 "state": "enabled", 00:44:53.351 "thread": "nvmf_tgt_poll_group_000", 00:44:53.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:53.351 "listen_address": { 00:44:53.351 "trtype": "TCP", 00:44:53.351 "adrfam": "IPv4", 00:44:53.351 "traddr": "10.0.0.3", 00:44:53.351 "trsvcid": "4420" 00:44:53.351 }, 00:44:53.351 "peer_address": { 00:44:53.351 "trtype": "TCP", 00:44:53.351 "adrfam": "IPv4", 00:44:53.351 "traddr": "10.0.0.1", 00:44:53.351 "trsvcid": "49954" 00:44:53.351 }, 00:44:53.351 "auth": { 00:44:53.351 "state": "completed", 00:44:53.351 "digest": "sha256", 00:44:53.351 "dhgroup": "ffdhe6144" 00:44:53.351 } 00:44:53.351 } 00:44:53.351 ]' 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:53.351 09:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:53.917 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:44:53.917 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:44:54.483 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:54.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:54.483 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:54.483 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.483 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:54.483 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.483 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:54.483 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:54.483 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:54.483 09:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:55.049 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:44:55.049 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:55.049 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:55.049 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:55.049 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:55.049 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:55.049 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:55.049 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.049 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:55.049 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.050 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:55.050 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:55.050 09:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:55.614 00:44:55.614 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:55.614 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:55.615 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:56.180 { 00:44:56.180 "cntlid": 41, 00:44:56.180 "qid": 0, 00:44:56.180 "state": "enabled", 00:44:56.180 "thread": "nvmf_tgt_poll_group_000", 00:44:56.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:56.180 "listen_address": { 00:44:56.180 "trtype": "TCP", 00:44:56.180 "adrfam": "IPv4", 00:44:56.180 "traddr": "10.0.0.3", 00:44:56.180 "trsvcid": "4420" 00:44:56.180 }, 00:44:56.180 "peer_address": { 00:44:56.180 "trtype": "TCP", 00:44:56.180 "adrfam": "IPv4", 00:44:56.180 "traddr": "10.0.0.1", 00:44:56.180 "trsvcid": "49978" 00:44:56.180 }, 00:44:56.180 "auth": { 00:44:56.180 "state": "completed", 00:44:56.180 "digest": "sha256", 00:44:56.180 "dhgroup": "ffdhe8192" 00:44:56.180 } 00:44:56.180 } 00:44:56.180 ]' 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:56.180 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:56.439 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:56.439 09:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:57.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
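For reference, a condensed, hedged sketch of the cycle the trace above keeps repeating for each key: it only restates commands that are visible verbatim in the xtrace output, with "$hostnqn" standing in for the nqn.2014-08.org.nvmexpress:uuid:eacce601-... host NQN; the variable names and the exact ordering of the helper internals are illustrative, not the literal target/auth.sh source. The kernel-initiator connect that follows each cycle is shown separately further below.

    # host side: restrict the initiator to one digest/dhgroup combination
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # target side: allow this host to authenticate with the given key pair
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach a controller using the same keys, then check the qpair
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
    # tear down before the next key/dhgroup iteration
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"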
00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:57.372 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:58.306 00:44:58.306 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:58.306 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:58.306 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:58.306 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:58.306 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:58.306 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:58.306 09:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:58.564 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:58.564 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:58.564 { 00:44:58.564 "cntlid": 43, 00:44:58.564 "qid": 0, 00:44:58.564 "state": "enabled", 00:44:58.564 "thread": "nvmf_tgt_poll_group_000", 00:44:58.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:44:58.564 "listen_address": { 00:44:58.564 "trtype": "TCP", 00:44:58.564 "adrfam": "IPv4", 00:44:58.564 "traddr": "10.0.0.3", 00:44:58.564 "trsvcid": "4420" 00:44:58.564 }, 00:44:58.564 "peer_address": { 00:44:58.564 "trtype": "TCP", 00:44:58.564 "adrfam": "IPv4", 00:44:58.564 "traddr": "10.0.0.1", 00:44:58.564 "trsvcid": "50008" 00:44:58.564 }, 00:44:58.564 "auth": { 00:44:58.564 "state": "completed", 00:44:58.564 "digest": "sha256", 00:44:58.564 "dhgroup": "ffdhe8192" 00:44:58.564 } 00:44:58.564 } 00:44:58.564 ]' 00:44:58.565 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:58.565 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:58.565 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:58.565 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:58.565 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:58.565 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:58.565 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:58.565 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:58.823 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:58.823 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:44:59.759 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:59.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:59.759 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:44:59.759 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:59.759 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
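The digest/dhgroup/state assertions in the trace (the @75-@77 checks) boil down to three jq lookups over the nvmf_subsystem_get_qpairs output shown above. A minimal equivalent is sketched below; how auth.sh actually captures the JSON into a variable is not visible in the trace, so the here-string is an assumption.

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # each line mirrors one assertion from the trace
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]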
00:44:59.759 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:59.759 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:59.759 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:59.759 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:00.016 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:00.946 00:45:00.946 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:00.946 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:00.946 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:01.204 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:01.204 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:01.204 09:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.204 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:01.204 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.204 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:01.204 { 00:45:01.204 "cntlid": 45, 00:45:01.204 "qid": 0, 00:45:01.204 "state": "enabled", 00:45:01.204 "thread": "nvmf_tgt_poll_group_000", 00:45:01.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:01.204 "listen_address": { 00:45:01.204 "trtype": "TCP", 00:45:01.204 "adrfam": "IPv4", 00:45:01.204 "traddr": "10.0.0.3", 00:45:01.204 "trsvcid": "4420" 00:45:01.204 }, 00:45:01.204 "peer_address": { 00:45:01.204 "trtype": "TCP", 00:45:01.204 "adrfam": "IPv4", 00:45:01.204 "traddr": "10.0.0.1", 00:45:01.204 "trsvcid": "39444" 00:45:01.204 }, 00:45:01.204 "auth": { 00:45:01.204 "state": "completed", 00:45:01.204 "digest": "sha256", 00:45:01.204 "dhgroup": "ffdhe8192" 00:45:01.204 } 00:45:01.204 } 00:45:01.204 ]' 00:45:01.204 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:01.204 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:45:01.204 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:01.462 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:01.462 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:01.462 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:01.462 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:01.462 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:01.718 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:01.718 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:02.283 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:02.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:02.283 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:02.283 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
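The nvme_connect/@36 lines above correspond to a plain nvme-cli connect that carries the same DHHC-1 secrets on the command line. A shortened sketch, with "$hostnqn"/"$hostid" and the <base64 ...> placeholders standing in for the literal values printed in the trace:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:02:<base64 host key>" \
        --dhchap-ctrl-secret "DHHC-1:01:<base64 controller key>"
    # once the log reports "disconnected 1 controller(s)" the cycle is torn down
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0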
00:45:02.283 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:02.283 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:02.283 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:02.283 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:45:02.283 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:45:02.849 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:45:02.849 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:02.849 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:45:02.849 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:45:02.849 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:45:02.849 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:02.849 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:45:02.849 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:02.850 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:02.850 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:02.850 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:02.850 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:02.850 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:03.435 00:45:03.435 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:03.435 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:03.435 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:04.001 
09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:04.001 { 00:45:04.001 "cntlid": 47, 00:45:04.001 "qid": 0, 00:45:04.001 "state": "enabled", 00:45:04.001 "thread": "nvmf_tgt_poll_group_000", 00:45:04.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:04.001 "listen_address": { 00:45:04.001 "trtype": "TCP", 00:45:04.001 "adrfam": "IPv4", 00:45:04.001 "traddr": "10.0.0.3", 00:45:04.001 "trsvcid": "4420" 00:45:04.001 }, 00:45:04.001 "peer_address": { 00:45:04.001 "trtype": "TCP", 00:45:04.001 "adrfam": "IPv4", 00:45:04.001 "traddr": "10.0.0.1", 00:45:04.001 "trsvcid": "39468" 00:45:04.001 }, 00:45:04.001 "auth": { 00:45:04.001 "state": "completed", 00:45:04.001 "digest": "sha256", 00:45:04.001 "dhgroup": "ffdhe8192" 00:45:04.001 } 00:45:04.001 } 00:45:04.001 ]' 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:04.001 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:04.258 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:04.258 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:05.229 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:05.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:05.229 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:05.229 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.229 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
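From this point the trace switches from sha256/ffdhe8192 to sha384 with the null dhgroup. The @118/@119/@120 markers imply the nesting sketched below; the array names follow the loop variables echoed in the trace, and connect_authenticate is the helper whose expansion appears at @123, so treat this as a reconstruction rather than the literal script.

    for digest in "${digests[@]}"; do          # sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ..., ffdhe8192
        for keyid in "${!keys[@]}"; do         # key0 .. key3
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done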
00:45:05.229 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.229 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:45:05.229 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:45:05.229 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:05.229 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:45:05.229 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:45:05.488 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:45:05.488 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:05.488 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:05.488 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:45:05.488 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:45:05.488 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:05.488 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:05.489 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.489 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:05.489 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.489 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:05.489 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:05.489 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:05.747 00:45:05.747 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:05.747 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:05.747 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:06.313 { 00:45:06.313 "cntlid": 49, 00:45:06.313 "qid": 0, 00:45:06.313 "state": "enabled", 00:45:06.313 "thread": "nvmf_tgt_poll_group_000", 00:45:06.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:06.313 "listen_address": { 00:45:06.313 "trtype": "TCP", 00:45:06.313 "adrfam": "IPv4", 00:45:06.313 "traddr": "10.0.0.3", 00:45:06.313 "trsvcid": "4420" 00:45:06.313 }, 00:45:06.313 "peer_address": { 00:45:06.313 "trtype": "TCP", 00:45:06.313 "adrfam": "IPv4", 00:45:06.313 "traddr": "10.0.0.1", 00:45:06.313 "trsvcid": "39484" 00:45:06.313 }, 00:45:06.313 "auth": { 00:45:06.313 "state": "completed", 00:45:06.313 "digest": "sha384", 00:45:06.313 "dhgroup": "null" 00:45:06.313 } 00:45:06.313 } 00:45:06.313 ]' 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:06.313 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:06.572 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:06.572 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:07.505 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:07.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:07.506 09:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:07.506 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.506 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:07.506 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.506 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:07.506 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:45:07.506 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:07.771 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:08.341 00:45:08.341 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:08.341 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:08.341 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:08.599 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:08.599 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:08.599 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:08.599 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:08.599 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:08.599 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:08.599 { 00:45:08.599 "cntlid": 51, 00:45:08.599 "qid": 0, 00:45:08.599 "state": "enabled", 00:45:08.599 "thread": "nvmf_tgt_poll_group_000", 00:45:08.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:08.599 "listen_address": { 00:45:08.599 "trtype": "TCP", 00:45:08.599 "adrfam": "IPv4", 00:45:08.599 "traddr": "10.0.0.3", 00:45:08.599 "trsvcid": "4420" 00:45:08.599 }, 00:45:08.599 "peer_address": { 00:45:08.599 "trtype": "TCP", 00:45:08.599 "adrfam": "IPv4", 00:45:08.599 "traddr": "10.0.0.1", 00:45:08.599 "trsvcid": "39514" 00:45:08.599 }, 00:45:08.599 "auth": { 00:45:08.599 "state": "completed", 00:45:08.599 "digest": "sha384", 00:45:08.599 "dhgroup": "null" 00:45:08.599 } 00:45:08.599 } 00:45:08.599 ]' 00:45:08.599 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:08.856 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:08.856 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:08.856 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:45:08.856 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:08.856 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:08.856 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:08.856 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:09.114 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:09.114 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:10.047 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:10.047 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:10.047 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:10.047 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.047 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.047 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.047 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:10.047 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:45:10.047 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:10.305 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:10.563 00:45:10.563 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:10.563 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:45:10.563 09:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:11.129 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:11.129 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:11.129 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:11.130 { 00:45:11.130 "cntlid": 53, 00:45:11.130 "qid": 0, 00:45:11.130 "state": "enabled", 00:45:11.130 "thread": "nvmf_tgt_poll_group_000", 00:45:11.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:11.130 "listen_address": { 00:45:11.130 "trtype": "TCP", 00:45:11.130 "adrfam": "IPv4", 00:45:11.130 "traddr": "10.0.0.3", 00:45:11.130 "trsvcid": "4420" 00:45:11.130 }, 00:45:11.130 "peer_address": { 00:45:11.130 "trtype": "TCP", 00:45:11.130 "adrfam": "IPv4", 00:45:11.130 "traddr": "10.0.0.1", 00:45:11.130 "trsvcid": "50192" 00:45:11.130 }, 00:45:11.130 "auth": { 00:45:11.130 "state": "completed", 00:45:11.130 "digest": "sha384", 00:45:11.130 "dhgroup": "null" 00:45:11.130 } 00:45:11.130 } 00:45:11.130 ]' 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:11.130 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:11.388 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:11.388 09:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:12.321 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:12.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:12.321 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:12.321 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.321 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:12.321 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.321 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:12.321 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:45:12.321 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:45:12.579 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:45:12.579 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:12.579 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:12.579 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:45:12.579 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:45:12.579 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:12.579 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:45:12.579 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.579 09:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:12.579 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.579 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:12.579 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:12.579 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:13.144 00:45:13.145 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:13.145 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:45:13.145 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:13.403 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:13.403 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:13.403 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:13.403 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:13.403 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:13.403 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:13.403 { 00:45:13.403 "cntlid": 55, 00:45:13.403 "qid": 0, 00:45:13.403 "state": "enabled", 00:45:13.403 "thread": "nvmf_tgt_poll_group_000", 00:45:13.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:13.403 "listen_address": { 00:45:13.403 "trtype": "TCP", 00:45:13.403 "adrfam": "IPv4", 00:45:13.403 "traddr": "10.0.0.3", 00:45:13.403 "trsvcid": "4420" 00:45:13.403 }, 00:45:13.403 "peer_address": { 00:45:13.403 "trtype": "TCP", 00:45:13.403 "adrfam": "IPv4", 00:45:13.403 "traddr": "10.0.0.1", 00:45:13.403 "trsvcid": "50218" 00:45:13.403 }, 00:45:13.403 "auth": { 00:45:13.403 "state": "completed", 00:45:13.403 "digest": "sha384", 00:45:13.403 "dhgroup": "null" 00:45:13.403 } 00:45:13.403 } 00:45:13.403 ]' 00:45:13.403 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:13.403 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:13.403 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:13.662 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:45:13.662 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:13.662 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:13.662 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:13.662 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:13.920 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:13.920 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:14.876 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:14.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
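Every "hostrpc <cmd>" entry above is immediately followed by its @31 expansion into rpc.py against the host-side socket. A minimal reconstruction of that wrapper is sketched here; the real definition lives in target/auth.sh and may differ in detail, but the socket path and script location are taken directly from the expansions in the trace.

    hostrpc() {
        # talk to the initiator-side SPDK app rather than the target's RPC socket
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
    # e.g. the detach that closes out each authenticated qpair check:
    hostrpc bdev_nvme_detach_controller nvme0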
00:45:14.876 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:14.877 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.877 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:14.877 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.877 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:45:14.877 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:14.877 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:14.877 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:15.136 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:15.394 00:45:15.653 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:15.653 
09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:15.653 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:15.911 { 00:45:15.911 "cntlid": 57, 00:45:15.911 "qid": 0, 00:45:15.911 "state": "enabled", 00:45:15.911 "thread": "nvmf_tgt_poll_group_000", 00:45:15.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:15.911 "listen_address": { 00:45:15.911 "trtype": "TCP", 00:45:15.911 "adrfam": "IPv4", 00:45:15.911 "traddr": "10.0.0.3", 00:45:15.911 "trsvcid": "4420" 00:45:15.911 }, 00:45:15.911 "peer_address": { 00:45:15.911 "trtype": "TCP", 00:45:15.911 "adrfam": "IPv4", 00:45:15.911 "traddr": "10.0.0.1", 00:45:15.911 "trsvcid": "50236" 00:45:15.911 }, 00:45:15.911 "auth": { 00:45:15.911 "state": "completed", 00:45:15.911 "digest": "sha384", 00:45:15.911 "dhgroup": "ffdhe2048" 00:45:15.911 } 00:45:15.911 } 00:45:15.911 ]' 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:45:15.911 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:16.169 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:16.169 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:16.169 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:16.429 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:16.429 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: 
--dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:17.365 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:17.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:17.365 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:17.365 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.365 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:17.365 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.365 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:17.365 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:17.365 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:17.623 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:45:17.623 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:17.624 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:17.881 00:45:17.881 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:17.881 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:17.881 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:18.449 { 00:45:18.449 "cntlid": 59, 00:45:18.449 "qid": 0, 00:45:18.449 "state": "enabled", 00:45:18.449 "thread": "nvmf_tgt_poll_group_000", 00:45:18.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:18.449 "listen_address": { 00:45:18.449 "trtype": "TCP", 00:45:18.449 "adrfam": "IPv4", 00:45:18.449 "traddr": "10.0.0.3", 00:45:18.449 "trsvcid": "4420" 00:45:18.449 }, 00:45:18.449 "peer_address": { 00:45:18.449 "trtype": "TCP", 00:45:18.449 "adrfam": "IPv4", 00:45:18.449 "traddr": "10.0.0.1", 00:45:18.449 "trsvcid": "50268" 00:45:18.449 }, 00:45:18.449 "auth": { 00:45:18.449 "state": "completed", 00:45:18.449 "digest": "sha384", 00:45:18.449 "dhgroup": "ffdhe2048" 00:45:18.449 } 00:45:18.449 } 00:45:18.449 ]' 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:18.449 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:19.016 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:19.016 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:19.952 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:19.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:19.952 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:19.952 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:19.952 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:19.952 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:19.952 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:19.952 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:19.952 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:20.212 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:20.779 00:45:20.779 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:20.779 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:20.779 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:21.037 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:21.037 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:21.037 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:21.037 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:21.037 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:21.037 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:21.037 { 00:45:21.037 "cntlid": 61, 00:45:21.037 "qid": 0, 00:45:21.037 "state": "enabled", 00:45:21.037 "thread": "nvmf_tgt_poll_group_000", 00:45:21.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:21.037 "listen_address": { 00:45:21.037 "trtype": "TCP", 00:45:21.037 "adrfam": "IPv4", 00:45:21.037 "traddr": "10.0.0.3", 00:45:21.037 "trsvcid": "4420" 00:45:21.037 }, 00:45:21.037 "peer_address": { 00:45:21.037 "trtype": "TCP", 00:45:21.037 "adrfam": "IPv4", 00:45:21.037 "traddr": "10.0.0.1", 00:45:21.038 "trsvcid": "35672" 00:45:21.038 }, 00:45:21.038 "auth": { 00:45:21.038 "state": "completed", 00:45:21.038 "digest": "sha384", 00:45:21.038 "dhgroup": "ffdhe2048" 00:45:21.038 } 00:45:21.038 } 00:45:21.038 ]' 00:45:21.038 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:21.038 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:21.038 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:21.038 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:45:21.038 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:21.038 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:21.038 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:21.038 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:21.606 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:21.606 09:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:22.173 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:22.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:22.173 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:22.173 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.173 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:22.173 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.173 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:22.173 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:22.173 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:45:22.432 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:45:22.432 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:22.432 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:22.432 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:45:22.432 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:45:22.432 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:22.432 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:45:22.432 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.432 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:22.690 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.690 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:22.690 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:22.690 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:22.949 00:45:22.949 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:22.949 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:22.949 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:23.207 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:23.207 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:23.207 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:23.207 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:23.466 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:23.466 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:23.466 { 00:45:23.466 "cntlid": 63, 00:45:23.466 "qid": 0, 00:45:23.466 "state": "enabled", 00:45:23.466 "thread": "nvmf_tgt_poll_group_000", 00:45:23.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:23.466 "listen_address": { 00:45:23.466 "trtype": "TCP", 00:45:23.466 "adrfam": "IPv4", 00:45:23.466 "traddr": "10.0.0.3", 00:45:23.466 "trsvcid": "4420" 00:45:23.466 }, 00:45:23.466 "peer_address": { 00:45:23.466 "trtype": "TCP", 00:45:23.466 "adrfam": "IPv4", 00:45:23.466 "traddr": "10.0.0.1", 00:45:23.466 "trsvcid": "35682" 00:45:23.466 }, 00:45:23.466 "auth": { 00:45:23.466 "state": "completed", 00:45:23.466 "digest": "sha384", 00:45:23.466 "dhgroup": "ffdhe2048" 00:45:23.466 } 00:45:23.466 } 00:45:23.466 ]' 00:45:23.466 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:23.466 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:23.466 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:23.466 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:45:23.466 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:23.466 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:23.466 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:23.466 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:24.031 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:24.031 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:24.701 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:24.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:24.701 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:24.701 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.701 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:24.701 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.701 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:45:24.701 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:24.701 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:24.701 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:45:24.958 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:25.523 00:45:25.523 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:25.523 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:25.523 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:25.782 { 00:45:25.782 "cntlid": 65, 00:45:25.782 "qid": 0, 00:45:25.782 "state": "enabled", 00:45:25.782 "thread": "nvmf_tgt_poll_group_000", 00:45:25.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:25.782 "listen_address": { 00:45:25.782 "trtype": "TCP", 00:45:25.782 "adrfam": "IPv4", 00:45:25.782 "traddr": "10.0.0.3", 00:45:25.782 "trsvcid": "4420" 00:45:25.782 }, 00:45:25.782 "peer_address": { 00:45:25.782 "trtype": "TCP", 00:45:25.782 "adrfam": "IPv4", 00:45:25.782 "traddr": "10.0.0.1", 00:45:25.782 "trsvcid": "35720" 00:45:25.782 }, 00:45:25.782 "auth": { 00:45:25.782 "state": "completed", 00:45:25.782 "digest": "sha384", 00:45:25.782 "dhgroup": "ffdhe3072" 00:45:25.782 } 00:45:25.782 } 00:45:25.782 ]' 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:25.782 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:26.348 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:26.348 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:26.915 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:26.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:26.915 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:26.915 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:26.915 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:26.915 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:26.915 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:26.915 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:26.915 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:27.174 09:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:27.174 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:27.745 00:45:27.745 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:27.745 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:27.745 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:28.003 { 00:45:28.003 "cntlid": 67, 00:45:28.003 "qid": 0, 00:45:28.003 "state": "enabled", 00:45:28.003 "thread": "nvmf_tgt_poll_group_000", 00:45:28.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:28.003 "listen_address": { 00:45:28.003 "trtype": "TCP", 00:45:28.003 "adrfam": "IPv4", 00:45:28.003 "traddr": "10.0.0.3", 00:45:28.003 "trsvcid": "4420" 00:45:28.003 }, 00:45:28.003 "peer_address": { 00:45:28.003 "trtype": "TCP", 00:45:28.003 "adrfam": "IPv4", 00:45:28.003 "traddr": "10.0.0.1", 00:45:28.003 "trsvcid": "35750" 00:45:28.003 }, 00:45:28.003 "auth": { 00:45:28.003 "state": "completed", 00:45:28.003 "digest": "sha384", 00:45:28.003 "dhgroup": "ffdhe3072" 00:45:28.003 } 00:45:28.003 } 00:45:28.003 ]' 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:28.003 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:28.261 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:28.261 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:29.197 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:29.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:29.197 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:29.197 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.197 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:29.197 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.197 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:29.197 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:29.197 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:29.456 09:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:29.715 00:45:29.715 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:29.715 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:29.715 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:29.974 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:30.233 { 00:45:30.233 "cntlid": 69, 00:45:30.233 "qid": 0, 00:45:30.233 "state": "enabled", 00:45:30.233 "thread": "nvmf_tgt_poll_group_000", 00:45:30.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:30.233 "listen_address": { 00:45:30.233 "trtype": "TCP", 00:45:30.233 "adrfam": "IPv4", 00:45:30.233 "traddr": "10.0.0.3", 00:45:30.233 "trsvcid": "4420" 00:45:30.233 }, 00:45:30.233 "peer_address": { 00:45:30.233 "trtype": "TCP", 00:45:30.233 "adrfam": "IPv4", 00:45:30.233 "traddr": "10.0.0.1", 00:45:30.233 "trsvcid": "44524" 00:45:30.233 }, 00:45:30.233 "auth": { 00:45:30.233 "state": "completed", 00:45:30.233 "digest": "sha384", 00:45:30.233 "dhgroup": "ffdhe3072" 00:45:30.233 } 00:45:30.233 } 00:45:30.233 ]' 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:45:30.233 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:30.535 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:30.535 09:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:31.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:31.473 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:32.039 00:45:32.039 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:32.039 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:32.039 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:32.298 { 00:45:32.298 "cntlid": 71, 00:45:32.298 "qid": 0, 00:45:32.298 "state": "enabled", 00:45:32.298 "thread": "nvmf_tgt_poll_group_000", 00:45:32.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:32.298 "listen_address": { 00:45:32.298 "trtype": "TCP", 00:45:32.298 "adrfam": "IPv4", 00:45:32.298 "traddr": "10.0.0.3", 00:45:32.298 "trsvcid": "4420" 00:45:32.298 }, 00:45:32.298 "peer_address": { 00:45:32.298 "trtype": "TCP", 00:45:32.298 "adrfam": "IPv4", 00:45:32.298 "traddr": "10.0.0.1", 00:45:32.298 "trsvcid": "44568" 00:45:32.298 }, 00:45:32.298 "auth": { 00:45:32.298 "state": "completed", 00:45:32.298 "digest": "sha384", 00:45:32.298 "dhgroup": "ffdhe3072" 00:45:32.298 } 00:45:32.298 } 00:45:32.298 ]' 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:32.298 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:32.864 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:32.865 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:33.432 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:33.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:33.432 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:33.432 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.432 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:33.432 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.432 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:45:33.432 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:33.432 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:33.432 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:33.691 09:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:33.691 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:34.259 00:45:34.259 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:34.259 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:34.259 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:34.518 { 00:45:34.518 "cntlid": 73, 00:45:34.518 "qid": 0, 00:45:34.518 "state": "enabled", 00:45:34.518 "thread": "nvmf_tgt_poll_group_000", 00:45:34.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:34.518 "listen_address": { 00:45:34.518 "trtype": "TCP", 00:45:34.518 "adrfam": "IPv4", 00:45:34.518 "traddr": "10.0.0.3", 00:45:34.518 "trsvcid": "4420" 00:45:34.518 }, 00:45:34.518 "peer_address": { 00:45:34.518 "trtype": "TCP", 00:45:34.518 "adrfam": "IPv4", 00:45:34.518 "traddr": "10.0.0.1", 00:45:34.518 "trsvcid": "44592" 00:45:34.518 }, 00:45:34.518 "auth": { 00:45:34.518 "state": "completed", 00:45:34.518 "digest": "sha384", 00:45:34.518 "dhgroup": "ffdhe4096" 00:45:34.518 } 00:45:34.518 } 00:45:34.518 ]' 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:34.518 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:35.084 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:35.084 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:35.650 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:35.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:35.650 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:35.650 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.650 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:35.650 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.650 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:35.650 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:35.650 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:35.910 09:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:35.910 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:36.476 00:45:36.476 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:36.476 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:36.476 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:36.735 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:36.735 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:36.735 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.735 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:36.735 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.735 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:36.735 { 00:45:36.735 "cntlid": 75, 00:45:36.735 "qid": 0, 00:45:36.735 "state": "enabled", 00:45:36.735 "thread": "nvmf_tgt_poll_group_000", 00:45:36.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:36.735 "listen_address": { 00:45:36.735 "trtype": "TCP", 00:45:36.735 "adrfam": "IPv4", 00:45:36.735 "traddr": "10.0.0.3", 00:45:36.735 "trsvcid": "4420" 00:45:36.735 }, 00:45:36.735 "peer_address": { 00:45:36.735 "trtype": "TCP", 00:45:36.735 "adrfam": "IPv4", 00:45:36.736 "traddr": "10.0.0.1", 00:45:36.736 "trsvcid": "44610" 00:45:36.736 }, 00:45:36.736 "auth": { 00:45:36.736 "state": "completed", 00:45:36.736 "digest": "sha384", 00:45:36.736 "dhgroup": "ffdhe4096" 00:45:36.736 } 00:45:36.736 } 00:45:36.736 ]' 00:45:36.736 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:36.736 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:36.736 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:36.994 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:45:36.994 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:36.994 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:36.994 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:36.994 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:37.252 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:37.252 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:37.822 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:37.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:37.822 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:37.822 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.822 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:37.822 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.822 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:37.822 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:37.822 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:38.081 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:38.082 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:38.648 00:45:38.648 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:38.648 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:38.648 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:38.906 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:38.906 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:38.906 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.906 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:38.906 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.906 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:38.906 { 00:45:38.906 "cntlid": 77, 00:45:38.906 "qid": 0, 00:45:38.907 "state": "enabled", 00:45:38.907 "thread": "nvmf_tgt_poll_group_000", 00:45:38.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:38.907 "listen_address": { 00:45:38.907 "trtype": "TCP", 00:45:38.907 "adrfam": "IPv4", 00:45:38.907 "traddr": "10.0.0.3", 00:45:38.907 "trsvcid": "4420" 00:45:38.907 }, 00:45:38.907 "peer_address": { 00:45:38.907 "trtype": "TCP", 00:45:38.907 "adrfam": "IPv4", 00:45:38.907 "traddr": "10.0.0.1", 00:45:38.907 "trsvcid": "44632" 00:45:38.907 }, 00:45:38.907 "auth": { 00:45:38.907 "state": "completed", 00:45:38.907 "digest": "sha384", 00:45:38.907 "dhgroup": "ffdhe4096" 00:45:38.907 } 00:45:38.907 } 00:45:38.907 ]' 00:45:38.907 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:38.907 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:38.907 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:45:39.165 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:45:39.165 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:39.165 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:39.165 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:39.165 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:39.423 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:39.423 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:40.356 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:40.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:40.356 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:40.356 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:40.356 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:40.356 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:40.356 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:40.356 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:40.356 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:40.613 09:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:40.613 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:40.872 00:45:41.130 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:41.130 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:41.130 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:41.388 { 00:45:41.388 "cntlid": 79, 00:45:41.388 "qid": 0, 00:45:41.388 "state": "enabled", 00:45:41.388 "thread": "nvmf_tgt_poll_group_000", 00:45:41.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:41.388 "listen_address": { 00:45:41.388 "trtype": "TCP", 00:45:41.388 "adrfam": "IPv4", 00:45:41.388 "traddr": "10.0.0.3", 00:45:41.388 "trsvcid": "4420" 00:45:41.388 }, 00:45:41.388 "peer_address": { 00:45:41.388 "trtype": "TCP", 00:45:41.388 "adrfam": "IPv4", 00:45:41.388 "traddr": "10.0.0.1", 00:45:41.388 "trsvcid": "43728" 00:45:41.388 }, 00:45:41.388 "auth": { 00:45:41.388 "state": "completed", 00:45:41.388 "digest": "sha384", 00:45:41.388 "dhgroup": "ffdhe4096" 00:45:41.388 } 00:45:41.388 } 00:45:41.388 ]' 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:41.388 09:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:41.388 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:41.647 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:41.647 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:42.580 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:42.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:42.580 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:42.580 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.580 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:42.580 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.580 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:45:42.580 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:42.580 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:42.580 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:42.839 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:43.404 00:45:43.404 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:43.404 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:43.404 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:43.661 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:43.661 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:43.661 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:43.661 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:43.661 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:43.661 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:43.661 { 00:45:43.661 "cntlid": 81, 00:45:43.661 "qid": 0, 00:45:43.661 "state": "enabled", 00:45:43.661 "thread": "nvmf_tgt_poll_group_000", 00:45:43.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:43.661 "listen_address": { 00:45:43.661 "trtype": "TCP", 00:45:43.661 "adrfam": "IPv4", 00:45:43.662 "traddr": "10.0.0.3", 00:45:43.662 "trsvcid": "4420" 00:45:43.662 }, 00:45:43.662 "peer_address": { 00:45:43.662 "trtype": "TCP", 00:45:43.662 "adrfam": "IPv4", 00:45:43.662 "traddr": "10.0.0.1", 00:45:43.662 "trsvcid": "43762" 00:45:43.662 }, 00:45:43.662 "auth": { 00:45:43.662 "state": "completed", 00:45:43.662 "digest": "sha384", 00:45:43.662 "dhgroup": "ffdhe6144" 00:45:43.662 } 00:45:43.662 } 00:45:43.662 ]' 00:45:43.662 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
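[editor's note] The log above repeats one connect_authenticate cycle per digest/DH-group/key combination. The following is a condensed, hedged sketch of a single iteration (sha384, ffdhe6144, key0) reconstructed from the RPC calls visible in the log. Addresses, NQNs, flags and the host RPC socket path are copied from the log; the target RPC socket path is an assumed default, and key0/ckey0 are assumed to be keyring keys registered earlier in the test. This is not the test script itself, only an illustration of the sequence it exercises.

#!/usr/bin/env bash
# Sketch of one DH-CHAP connect/verify/teardown cycle as exercised by target/auth.sh.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
target_sock=/var/tmp/spdk.sock        # assumed default target RPC socket
host_sock=/var/tmp/host.sock          # host-side bdev_nvme RPC socket (from the log)

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455
traddr=10.0.0.3; trsvcid=4420
digest=sha384; dhgroup=ffdhe6144; key=key0; ckey=ckey0   # assumed pre-registered keyring keys

# 1. Restrict the host-side initiator to the digest/DH group under test.
"$rpc" -s "$host_sock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the subsystem with the chosen DH-CHAP key pair.
"$rpc" -s "$target_sock" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# 3. Attach a controller from the host; this forces the DH-CHAP handshake.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a "$traddr" -s "$trsvcid" -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# 4. Verify the controller exists and that the target reports a completed,
#    correctly parameterized authentication on the new queue pair.
[[ "$("$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
qpairs=$("$rpc" -s "$target_sock" nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$digest"  ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$dhgroup" ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed  ]]

# 5. Tear down. (The real test also does an nvme-cli connect/disconnect with the
#    plaintext DHHC-1 secrets, elided here, before removing the host.)
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
"$rpc" -s "$target_sock" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

[end editor's note]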
00:45:43.662 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:43.662 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:43.662 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:45:43.662 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:43.662 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:43.662 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:43.662 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:43.919 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:43.919 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:44.854 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:44.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:44.854 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:44.854 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:44.854 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:44.854 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:44.854 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:44.854 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:44.854 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:45.112 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:45.677 00:45:45.677 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:45.677 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:45.677 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:45.935 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:45.935 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:45.935 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.936 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:45.936 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.936 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:45.936 { 00:45:45.936 "cntlid": 83, 00:45:45.936 "qid": 0, 00:45:45.936 "state": "enabled", 00:45:45.936 "thread": "nvmf_tgt_poll_group_000", 00:45:45.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:45.936 "listen_address": { 00:45:45.936 "trtype": "TCP", 00:45:45.936 "adrfam": "IPv4", 00:45:45.936 "traddr": "10.0.0.3", 00:45:45.936 "trsvcid": "4420" 00:45:45.936 }, 00:45:45.936 "peer_address": { 00:45:45.936 "trtype": "TCP", 00:45:45.936 "adrfam": "IPv4", 00:45:45.936 "traddr": "10.0.0.1", 00:45:45.936 "trsvcid": "43796" 00:45:45.936 }, 00:45:45.936 "auth": { 00:45:45.936 "state": "completed", 00:45:45.936 "digest": "sha384", 
00:45:45.936 "dhgroup": "ffdhe6144" 00:45:45.936 } 00:45:45.936 } 00:45:45.936 ]' 00:45:45.936 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:45.936 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:45.936 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:46.194 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:45:46.194 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:46.194 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:46.194 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:46.194 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:46.452 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:46.452 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:47.388 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:47.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:47.388 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:47.388 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:47.388 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:47.388 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:47.388 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:47.388 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:47.388 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:47.646 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:47.903 00:45:48.161 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:48.161 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:48.161 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:48.420 { 00:45:48.420 "cntlid": 85, 00:45:48.420 "qid": 0, 00:45:48.420 "state": "enabled", 00:45:48.420 "thread": "nvmf_tgt_poll_group_000", 00:45:48.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:48.420 "listen_address": { 00:45:48.420 "trtype": "TCP", 00:45:48.420 "adrfam": "IPv4", 00:45:48.420 "traddr": "10.0.0.3", 00:45:48.420 "trsvcid": "4420" 00:45:48.420 }, 00:45:48.420 "peer_address": { 00:45:48.420 "trtype": "TCP", 00:45:48.420 "adrfam": "IPv4", 00:45:48.420 "traddr": "10.0.0.1", 00:45:48.420 "trsvcid": "43822" 
00:45:48.420 }, 00:45:48.420 "auth": { 00:45:48.420 "state": "completed", 00:45:48.420 "digest": "sha384", 00:45:48.420 "dhgroup": "ffdhe6144" 00:45:48.420 } 00:45:48.420 } 00:45:48.420 ]' 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:48.420 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:48.678 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:48.678 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:49.615 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:49.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:49.615 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:49.615 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.615 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:49.615 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.615 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:49.615 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:49.615 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:49.873 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:50.440 00:45:50.440 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:50.440 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:50.440 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:50.698 { 00:45:50.698 "cntlid": 87, 00:45:50.698 "qid": 0, 00:45:50.698 "state": "enabled", 00:45:50.698 "thread": "nvmf_tgt_poll_group_000", 00:45:50.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:50.698 "listen_address": { 00:45:50.698 "trtype": "TCP", 00:45:50.698 "adrfam": "IPv4", 00:45:50.698 "traddr": "10.0.0.3", 00:45:50.698 "trsvcid": "4420" 00:45:50.698 }, 00:45:50.698 "peer_address": { 00:45:50.698 "trtype": "TCP", 00:45:50.698 "adrfam": "IPv4", 00:45:50.698 "traddr": "10.0.0.1", 00:45:50.698 "trsvcid": 
"33574" 00:45:50.698 }, 00:45:50.698 "auth": { 00:45:50.698 "state": "completed", 00:45:50.698 "digest": "sha384", 00:45:50.698 "dhgroup": "ffdhe6144" 00:45:50.698 } 00:45:50.698 } 00:45:50.698 ]' 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:45:50.698 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:50.956 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:50.956 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:50.956 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:51.215 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:51.215 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:45:51.782 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:51.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:51.782 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:51.782 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.782 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:51.782 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.782 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:45:51.782 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:51.782 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:51.782 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:52.040 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:52.974 00:45:52.974 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:52.974 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:52.974 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:53.232 { 00:45:53.232 "cntlid": 89, 00:45:53.232 "qid": 0, 00:45:53.232 "state": "enabled", 00:45:53.232 "thread": "nvmf_tgt_poll_group_000", 00:45:53.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:53.232 "listen_address": { 00:45:53.232 "trtype": "TCP", 00:45:53.232 "adrfam": "IPv4", 00:45:53.232 "traddr": "10.0.0.3", 00:45:53.232 "trsvcid": "4420" 00:45:53.232 }, 00:45:53.232 "peer_address": { 00:45:53.232 
"trtype": "TCP", 00:45:53.232 "adrfam": "IPv4", 00:45:53.232 "traddr": "10.0.0.1", 00:45:53.232 "trsvcid": "33600" 00:45:53.232 }, 00:45:53.232 "auth": { 00:45:53.232 "state": "completed", 00:45:53.232 "digest": "sha384", 00:45:53.232 "dhgroup": "ffdhe8192" 00:45:53.232 } 00:45:53.232 } 00:45:53.232 ]' 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:53.232 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:53.798 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:53.798 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:45:54.732 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:54.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:54.732 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:54.732 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:54.732 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:54.732 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:54.732 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:54.732 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:54.732 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:54.732 09:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:45:54.732 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:54.732 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:54.732 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:45:54.732 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:45:54.732 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:54.732 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:54.732 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:54.732 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:54.991 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:54.991 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:54.991 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:54.991 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:55.557 00:45:55.557 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:55.557 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:55.557 09:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:55.815 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:55.815 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:55.815 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:55.815 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:55.815 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:55.815 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:55.815 { 00:45:55.815 "cntlid": 91, 00:45:55.815 "qid": 0, 00:45:55.815 "state": "enabled", 00:45:55.815 "thread": "nvmf_tgt_poll_group_000", 00:45:55.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 
00:45:55.815 "listen_address": { 00:45:55.815 "trtype": "TCP", 00:45:55.815 "adrfam": "IPv4", 00:45:55.815 "traddr": "10.0.0.3", 00:45:55.815 "trsvcid": "4420" 00:45:55.815 }, 00:45:55.815 "peer_address": { 00:45:55.815 "trtype": "TCP", 00:45:55.815 "adrfam": "IPv4", 00:45:55.815 "traddr": "10.0.0.1", 00:45:55.815 "trsvcid": "33636" 00:45:55.815 }, 00:45:55.815 "auth": { 00:45:55.815 "state": "completed", 00:45:55.815 "digest": "sha384", 00:45:55.815 "dhgroup": "ffdhe8192" 00:45:55.815 } 00:45:55.815 } 00:45:55.815 ]' 00:45:55.815 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:55.815 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:55.815 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:56.074 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:56.074 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:56.074 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:56.074 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:56.074 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:56.332 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:56.332 09:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:45:56.899 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:56.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:56.899 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:56.899 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:56.899 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:56.899 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:56.899 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:56.899 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:56.899 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:57.486 09:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:58.052 00:45:58.052 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:58.052 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:58.052 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:58.310 { 00:45:58.310 "cntlid": 93, 00:45:58.310 "qid": 0, 00:45:58.310 "state": "enabled", 00:45:58.310 "thread": 
"nvmf_tgt_poll_group_000", 00:45:58.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:45:58.310 "listen_address": { 00:45:58.310 "trtype": "TCP", 00:45:58.310 "adrfam": "IPv4", 00:45:58.310 "traddr": "10.0.0.3", 00:45:58.310 "trsvcid": "4420" 00:45:58.310 }, 00:45:58.310 "peer_address": { 00:45:58.310 "trtype": "TCP", 00:45:58.310 "adrfam": "IPv4", 00:45:58.310 "traddr": "10.0.0.1", 00:45:58.310 "trsvcid": "33664" 00:45:58.310 }, 00:45:58.310 "auth": { 00:45:58.310 "state": "completed", 00:45:58.310 "digest": "sha384", 00:45:58.310 "dhgroup": "ffdhe8192" 00:45:58.310 } 00:45:58.310 } 00:45:58.310 ]' 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:58.310 09:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:58.876 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:58.876 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:45:59.442 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:59.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:59.442 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:45:59.442 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.442 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:59.442 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.442 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:59.442 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:59.442 09:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:59.700 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:00.266 00:46:00.266 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:00.266 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:00.266 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:00.832 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:00.832 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:00.832 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:00.832 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:00.833 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:00.833 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:00.833 { 00:46:00.833 "cntlid": 95, 00:46:00.833 "qid": 0, 00:46:00.833 "state": "enabled", 00:46:00.833 
"thread": "nvmf_tgt_poll_group_000", 00:46:00.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:00.833 "listen_address": { 00:46:00.833 "trtype": "TCP", 00:46:00.833 "adrfam": "IPv4", 00:46:00.833 "traddr": "10.0.0.3", 00:46:00.833 "trsvcid": "4420" 00:46:00.833 }, 00:46:00.833 "peer_address": { 00:46:00.833 "trtype": "TCP", 00:46:00.833 "adrfam": "IPv4", 00:46:00.833 "traddr": "10.0.0.1", 00:46:00.833 "trsvcid": "34164" 00:46:00.833 }, 00:46:00.833 "auth": { 00:46:00.833 "state": "completed", 00:46:00.833 "digest": "sha384", 00:46:00.833 "dhgroup": "ffdhe8192" 00:46:00.833 } 00:46:00.833 } 00:46:00.833 ]' 00:46:00.833 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:00.833 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:46:00.833 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:00.833 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:46:00.833 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:00.833 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:00.833 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:00.833 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:01.091 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:01.091 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:02.025 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:02.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:02.026 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:02.026 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:02.026 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:02.026 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:02.026 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:46:02.026 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:46:02.026 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:02.026 09:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:46:02.026 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:02.285 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:02.544 00:46:02.544 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:02.544 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:02.544 09:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:02.802 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:02.802 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:02.802 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:02.802 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:02.802 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:02.802 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:02.802 { 00:46:02.802 "cntlid": 97, 00:46:02.802 "qid": 0, 00:46:02.802 "state": "enabled", 00:46:02.802 "thread": "nvmf_tgt_poll_group_000", 00:46:02.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:02.802 "listen_address": { 00:46:02.802 "trtype": "TCP", 00:46:02.802 "adrfam": "IPv4", 00:46:02.802 "traddr": "10.0.0.3", 00:46:02.802 "trsvcid": "4420" 00:46:02.802 }, 00:46:02.802 "peer_address": { 00:46:02.802 "trtype": "TCP", 00:46:02.802 "adrfam": "IPv4", 00:46:02.802 "traddr": "10.0.0.1", 00:46:02.802 "trsvcid": "34188" 00:46:02.802 }, 00:46:02.802 "auth": { 00:46:02.802 "state": "completed", 00:46:02.802 "digest": "sha512", 00:46:02.802 "dhgroup": "null" 00:46:02.802 } 00:46:02.802 } 00:46:02.802 ]' 00:46:02.802 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:03.060 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:03.060 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:03.060 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:46:03.060 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:03.060 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:03.060 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:03.060 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:03.320 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:03.320 09:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:04.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:04.299 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:04.865 00:46:04.865 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:04.865 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:04.865 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:05.123 09:58:12 
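Before each (digest, dhgroup) combination the initiator is pinned to exactly that proposal via bdev_nvme_set_options, so the later qpair check can assert on those exact values. The call for the sha512/null passes above, against the initiator's RPC socket, condensed from the trace:

    # Offer only SHA-512 and the 'null' group (no FFDHE exchange) to the target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null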
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:05.123 { 00:46:05.123 "cntlid": 99, 00:46:05.123 "qid": 0, 00:46:05.123 "state": "enabled", 00:46:05.123 "thread": "nvmf_tgt_poll_group_000", 00:46:05.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:05.123 "listen_address": { 00:46:05.123 "trtype": "TCP", 00:46:05.123 "adrfam": "IPv4", 00:46:05.123 "traddr": "10.0.0.3", 00:46:05.123 "trsvcid": "4420" 00:46:05.123 }, 00:46:05.123 "peer_address": { 00:46:05.123 "trtype": "TCP", 00:46:05.123 "adrfam": "IPv4", 00:46:05.123 "traddr": "10.0.0.1", 00:46:05.123 "trsvcid": "34230" 00:46:05.123 }, 00:46:05.123 "auth": { 00:46:05.123 "state": "completed", 00:46:05.123 "digest": "sha512", 00:46:05.123 "dhgroup": "null" 00:46:05.123 } 00:46:05.123 } 00:46:05.123 ]' 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:05.123 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:05.383 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:05.383 09:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:06.317 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:06.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:06.317 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:06.317 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:06.317 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:06.317 09:58:13 
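The per-pass assertion asks the target for its active queue pairs and picks the auth fields out of the JSON. A condensed sketch of that check, using the same subsystem NQN as this run and assuming the target app answers on rpc.py's default socket (the trace drives it through the test's rpc_cmd helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # Expected values for the sha512/null pass shown above.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]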
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:06.317 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:06.317 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:46:06.317 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:06.575 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:06.833 00:46:06.833 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:06.833 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:06.833 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:07.091 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:07.091 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:07.091 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:46:07.091 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:07.091 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:07.091 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:07.091 { 00:46:07.091 "cntlid": 101, 00:46:07.091 "qid": 0, 00:46:07.091 "state": "enabled", 00:46:07.091 "thread": "nvmf_tgt_poll_group_000", 00:46:07.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:07.091 "listen_address": { 00:46:07.091 "trtype": "TCP", 00:46:07.091 "adrfam": "IPv4", 00:46:07.091 "traddr": "10.0.0.3", 00:46:07.091 "trsvcid": "4420" 00:46:07.091 }, 00:46:07.091 "peer_address": { 00:46:07.091 "trtype": "TCP", 00:46:07.091 "adrfam": "IPv4", 00:46:07.091 "traddr": "10.0.0.1", 00:46:07.091 "trsvcid": "34254" 00:46:07.091 }, 00:46:07.091 "auth": { 00:46:07.091 "state": "completed", 00:46:07.091 "digest": "sha512", 00:46:07.091 "dhgroup": "null" 00:46:07.091 } 00:46:07.091 } 00:46:07.091 ]' 00:46:07.091 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:07.091 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:07.349 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:07.349 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:46:07.349 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:07.349 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:07.349 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:07.349 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:07.608 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:07.608 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:08.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:08.543 09:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:09.110 00:46:09.110 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:09.110 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:09.110 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:09.368 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:09.368 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:09.368 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:46:09.368 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:09.368 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:09.368 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:09.368 { 00:46:09.368 "cntlid": 103, 00:46:09.368 "qid": 0, 00:46:09.368 "state": "enabled", 00:46:09.368 "thread": "nvmf_tgt_poll_group_000", 00:46:09.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:09.368 "listen_address": { 00:46:09.368 "trtype": "TCP", 00:46:09.368 "adrfam": "IPv4", 00:46:09.368 "traddr": "10.0.0.3", 00:46:09.368 "trsvcid": "4420" 00:46:09.368 }, 00:46:09.368 "peer_address": { 00:46:09.368 "trtype": "TCP", 00:46:09.368 "adrfam": "IPv4", 00:46:09.369 "traddr": "10.0.0.1", 00:46:09.369 "trsvcid": "34284" 00:46:09.369 }, 00:46:09.369 "auth": { 00:46:09.369 "state": "completed", 00:46:09.369 "digest": "sha512", 00:46:09.369 "dhgroup": "null" 00:46:09.369 } 00:46:09.369 } 00:46:09.369 ]' 00:46:09.369 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:09.369 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:09.369 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:09.369 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:46:09.369 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:09.369 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:09.369 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:09.369 09:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:09.627 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:09.627 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:10.562 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:10.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:10.562 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:10.562 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.562 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:10.562 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:46:10.562 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:46:10.562 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:10.562 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:46:10.562 09:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:10.821 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:11.080 00:46:11.080 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:11.080 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:11.080 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:11.367 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:11.367 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:11.367 
09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:11.367 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:11.367 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:11.367 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:11.367 { 00:46:11.367 "cntlid": 105, 00:46:11.367 "qid": 0, 00:46:11.367 "state": "enabled", 00:46:11.367 "thread": "nvmf_tgt_poll_group_000", 00:46:11.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:11.367 "listen_address": { 00:46:11.367 "trtype": "TCP", 00:46:11.367 "adrfam": "IPv4", 00:46:11.367 "traddr": "10.0.0.3", 00:46:11.367 "trsvcid": "4420" 00:46:11.367 }, 00:46:11.367 "peer_address": { 00:46:11.367 "trtype": "TCP", 00:46:11.367 "adrfam": "IPv4", 00:46:11.367 "traddr": "10.0.0.1", 00:46:11.367 "trsvcid": "58808" 00:46:11.367 }, 00:46:11.367 "auth": { 00:46:11.367 "state": "completed", 00:46:11.367 "digest": "sha512", 00:46:11.367 "dhgroup": "ffdhe2048" 00:46:11.368 } 00:46:11.368 } 00:46:11.368 ]' 00:46:11.368 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:11.368 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:11.368 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:11.368 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:46:11.368 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:11.626 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:11.626 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:11.626 09:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:11.884 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:11.884 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:12.451 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:12.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:12.451 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:12.451 09:58:19 
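After every attach the test also confirms that exactly the expected controller exists on the initiator before tearing it down again. The check-and-detach idiom, condensed from the trace (same socket and controller name as above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # The initiator should report exactly the controller attached as 'nvme0'.
    name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == nvme0 ]]

    # Detach it so the next (digest, dhgroup, key) pass starts from a clean state.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0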
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:12.451 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:12.451 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:12.452 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:12.452 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:46:12.452 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:12.710 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:13.277 00:46:13.277 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:13.277 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:13.277 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:13.536 { 00:46:13.536 "cntlid": 107, 00:46:13.536 "qid": 0, 00:46:13.536 "state": "enabled", 00:46:13.536 "thread": "nvmf_tgt_poll_group_000", 00:46:13.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:13.536 "listen_address": { 00:46:13.536 "trtype": "TCP", 00:46:13.536 "adrfam": "IPv4", 00:46:13.536 "traddr": "10.0.0.3", 00:46:13.536 "trsvcid": "4420" 00:46:13.536 }, 00:46:13.536 "peer_address": { 00:46:13.536 "trtype": "TCP", 00:46:13.536 "adrfam": "IPv4", 00:46:13.536 "traddr": "10.0.0.1", 00:46:13.536 "trsvcid": "58840" 00:46:13.536 }, 00:46:13.536 "auth": { 00:46:13.536 "state": "completed", 00:46:13.536 "digest": "sha512", 00:46:13.536 "dhgroup": "ffdhe2048" 00:46:13.536 } 00:46:13.536 } 00:46:13.536 ]' 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:46:13.536 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:13.536 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:13.536 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:13.536 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:14.103 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:14.103 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:14.670 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:14.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:14.670 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:14.670 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:14.670 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:14.670 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:14.670 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:14.670 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:46:14.670 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:14.928 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:15.495 00:46:15.495 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:15.495 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:15.495 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:15.752 { 00:46:15.752 "cntlid": 109, 00:46:15.752 "qid": 0, 00:46:15.752 "state": "enabled", 00:46:15.752 "thread": "nvmf_tgt_poll_group_000", 00:46:15.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:15.752 "listen_address": { 00:46:15.752 "trtype": "TCP", 00:46:15.752 "adrfam": "IPv4", 00:46:15.752 "traddr": "10.0.0.3", 00:46:15.752 "trsvcid": "4420" 00:46:15.752 }, 00:46:15.752 "peer_address": { 00:46:15.752 "trtype": "TCP", 00:46:15.752 "adrfam": "IPv4", 00:46:15.752 "traddr": "10.0.0.1", 00:46:15.752 "trsvcid": "58852" 00:46:15.752 }, 00:46:15.752 "auth": { 00:46:15.752 "state": "completed", 00:46:15.752 "digest": "sha512", 00:46:15.752 "dhgroup": "ffdhe2048" 00:46:15.752 } 00:46:15.752 } 00:46:15.752 ]' 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:15.752 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:16.011 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:16.011 09:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:16.946 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:16.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:16.946 09:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:16.946 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:16.946 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:16.946 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:16.946 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:16.946 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:46:16.946 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:17.204 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:17.462 00:46:17.462 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:17.462 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:17.462 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:18.034 { 00:46:18.034 "cntlid": 111, 00:46:18.034 "qid": 0, 00:46:18.034 "state": "enabled", 00:46:18.034 "thread": "nvmf_tgt_poll_group_000", 00:46:18.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:18.034 "listen_address": { 00:46:18.034 "trtype": "TCP", 00:46:18.034 "adrfam": "IPv4", 00:46:18.034 "traddr": "10.0.0.3", 00:46:18.034 "trsvcid": "4420" 00:46:18.034 }, 00:46:18.034 "peer_address": { 00:46:18.034 "trtype": "TCP", 00:46:18.034 "adrfam": "IPv4", 00:46:18.034 "traddr": "10.0.0.1", 00:46:18.034 "trsvcid": "58882" 00:46:18.034 }, 00:46:18.034 "auth": { 00:46:18.034 "state": "completed", 00:46:18.034 "digest": "sha512", 00:46:18.034 "dhgroup": "ffdhe2048" 00:46:18.034 } 00:46:18.034 } 00:46:18.034 ]' 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:18.034 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:18.292 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:18.292 09:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:19.225 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:19.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:19.225 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:19.225 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.225 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:19.225 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.225 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:46:19.225 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:19.225 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:46:19.225 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:19.482 09:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:19.740 00:46:19.740 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:19.740 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:46:19.740 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:19.998 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:19.998 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:19.998 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.998 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:19.998 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.998 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:19.998 { 00:46:19.998 "cntlid": 113, 00:46:19.998 "qid": 0, 00:46:19.998 "state": "enabled", 00:46:19.998 "thread": "nvmf_tgt_poll_group_000", 00:46:19.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:19.998 "listen_address": { 00:46:19.998 "trtype": "TCP", 00:46:19.998 "adrfam": "IPv4", 00:46:19.998 "traddr": "10.0.0.3", 00:46:19.998 "trsvcid": "4420" 00:46:19.998 }, 00:46:19.998 "peer_address": { 00:46:19.998 "trtype": "TCP", 00:46:19.998 "adrfam": "IPv4", 00:46:19.998 "traddr": "10.0.0.1", 00:46:19.998 "trsvcid": "37548" 00:46:19.998 }, 00:46:19.998 "auth": { 00:46:19.998 "state": "completed", 00:46:19.998 "digest": "sha512", 00:46:19.998 "dhgroup": "ffdhe3072" 00:46:19.998 } 00:46:19.998 } 00:46:19.998 ]' 00:46:19.998 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:20.255 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:20.255 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:20.255 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:46:20.255 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:20.255 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:20.255 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:20.255 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:20.513 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:20.513 09:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:21.081 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:21.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:21.081 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:21.081 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:21.081 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:21.081 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:21.081 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:21.081 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:46:21.081 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:21.648 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:21.906 00:46:21.906 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:21.906 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:21.906 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:22.164 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:22.164 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:22.164 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:22.164 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:22.165 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:22.165 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:22.165 { 00:46:22.165 "cntlid": 115, 00:46:22.165 "qid": 0, 00:46:22.165 "state": "enabled", 00:46:22.165 "thread": "nvmf_tgt_poll_group_000", 00:46:22.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:22.165 "listen_address": { 00:46:22.165 "trtype": "TCP", 00:46:22.165 "adrfam": "IPv4", 00:46:22.165 "traddr": "10.0.0.3", 00:46:22.165 "trsvcid": "4420" 00:46:22.165 }, 00:46:22.165 "peer_address": { 00:46:22.165 "trtype": "TCP", 00:46:22.165 "adrfam": "IPv4", 00:46:22.165 "traddr": "10.0.0.1", 00:46:22.165 "trsvcid": "37576" 00:46:22.165 }, 00:46:22.165 "auth": { 00:46:22.165 "state": "completed", 00:46:22.165 "digest": "sha512", 00:46:22.165 "dhgroup": "ffdhe3072" 00:46:22.165 } 00:46:22.165 } 00:46:22.165 ]' 00:46:22.165 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:22.165 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:22.165 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:22.422 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:46:22.422 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:22.422 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:22.422 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:22.422 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:22.681 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:22.681 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid 
eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:23.247 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:23.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:23.247 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:23.247 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.247 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:23.247 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.247 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:23.247 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:46:23.247 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:46:23.506 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:46:23.506 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:23.506 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:23.506 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:46:23.506 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:46:23.506 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:23.506 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:23.506 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:23.506 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:23.764 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:23.764 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:23.764 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:23.764 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:24.023 00:46:24.023 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:24.023 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:24.023 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:24.282 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:24.282 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:24.282 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.282 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:24.282 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.282 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:24.282 { 00:46:24.282 "cntlid": 117, 00:46:24.282 "qid": 0, 00:46:24.282 "state": "enabled", 00:46:24.282 "thread": "nvmf_tgt_poll_group_000", 00:46:24.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:24.282 "listen_address": { 00:46:24.282 "trtype": "TCP", 00:46:24.282 "adrfam": "IPv4", 00:46:24.282 "traddr": "10.0.0.3", 00:46:24.282 "trsvcid": "4420" 00:46:24.282 }, 00:46:24.282 "peer_address": { 00:46:24.282 "trtype": "TCP", 00:46:24.282 "adrfam": "IPv4", 00:46:24.282 "traddr": "10.0.0.1", 00:46:24.282 "trsvcid": "37612" 00:46:24.282 }, 00:46:24.282 "auth": { 00:46:24.282 "state": "completed", 00:46:24.282 "digest": "sha512", 00:46:24.282 "dhgroup": "ffdhe3072" 00:46:24.282 } 00:46:24.282 } 00:46:24.282 ]' 00:46:24.282 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:24.541 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:24.541 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:24.541 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:46:24.541 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:24.541 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:24.541 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:24.541 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:24.800 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:24.800 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:25.734 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:25.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:25.734 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:25.734 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.734 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:25.734 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.734 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:25.734 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:46:25.734 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:25.993 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:26.251 00:46:26.251 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:26.251 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:26.251 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:26.509 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:26.509 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:26.509 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:26.509 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:26.509 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:26.509 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:26.509 { 00:46:26.509 "cntlid": 119, 00:46:26.509 "qid": 0, 00:46:26.509 "state": "enabled", 00:46:26.509 "thread": "nvmf_tgt_poll_group_000", 00:46:26.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:26.510 "listen_address": { 00:46:26.510 "trtype": "TCP", 00:46:26.510 "adrfam": "IPv4", 00:46:26.510 "traddr": "10.0.0.3", 00:46:26.510 "trsvcid": "4420" 00:46:26.510 }, 00:46:26.510 "peer_address": { 00:46:26.510 "trtype": "TCP", 00:46:26.510 "adrfam": "IPv4", 00:46:26.510 "traddr": "10.0.0.1", 00:46:26.510 "trsvcid": "37642" 00:46:26.510 }, 00:46:26.510 "auth": { 00:46:26.510 "state": "completed", 00:46:26.510 "digest": "sha512", 00:46:26.510 "dhgroup": "ffdhe3072" 00:46:26.510 } 00:46:26.510 } 00:46:26.510 ]' 00:46:26.510 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:26.768 09:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:26.768 09:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:26.768 09:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:46:26.768 09:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:26.768 09:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:26.768 09:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:26.768 09:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:27.027 09:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:27.027 09:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:27.961 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:27.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:27.961 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:27.961 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.961 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:27.961 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.961 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:46:27.961 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:27.961 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:46:27.961 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:28.219 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:28.477 00:46:28.477 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:28.477 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:28.477 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:28.736 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:28.736 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:28.736 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:28.736 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:28.736 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:28.736 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:28.736 { 00:46:28.736 "cntlid": 121, 00:46:28.736 "qid": 0, 00:46:28.736 "state": "enabled", 00:46:28.736 "thread": "nvmf_tgt_poll_group_000", 00:46:28.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:28.736 "listen_address": { 00:46:28.736 "trtype": "TCP", 00:46:28.736 "adrfam": "IPv4", 00:46:28.736 "traddr": "10.0.0.3", 00:46:28.736 "trsvcid": "4420" 00:46:28.736 }, 00:46:28.736 "peer_address": { 00:46:28.736 "trtype": "TCP", 00:46:28.736 "adrfam": "IPv4", 00:46:28.736 "traddr": "10.0.0.1", 00:46:28.736 "trsvcid": "37672" 00:46:28.736 }, 00:46:28.736 "auth": { 00:46:28.736 "state": "completed", 00:46:28.736 "digest": "sha512", 00:46:28.736 "dhgroup": "ffdhe4096" 00:46:28.736 } 00:46:28.736 } 00:46:28.736 ]' 00:46:28.994 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:28.994 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:28.994 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:28.994 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:46:28.994 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:28.994 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:28.994 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:28.994 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:29.253 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:29.253 09:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:30.188 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:30.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:30.188 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:30.188 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:30.188 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:30.188 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:30.188 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:30.188 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:46:30.188 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:30.447 09:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:31.014 00:46:31.014 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:31.014 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:31.014 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:31.273 { 00:46:31.273 "cntlid": 123, 00:46:31.273 "qid": 0, 00:46:31.273 "state": "enabled", 00:46:31.273 "thread": "nvmf_tgt_poll_group_000", 00:46:31.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:31.273 "listen_address": { 00:46:31.273 "trtype": "TCP", 00:46:31.273 "adrfam": "IPv4", 00:46:31.273 "traddr": "10.0.0.3", 00:46:31.273 "trsvcid": "4420" 00:46:31.273 }, 00:46:31.273 "peer_address": { 00:46:31.273 "trtype": "TCP", 00:46:31.273 "adrfam": "IPv4", 00:46:31.273 "traddr": "10.0.0.1", 00:46:31.273 "trsvcid": "50796" 00:46:31.273 }, 00:46:31.273 "auth": { 00:46:31.273 "state": "completed", 00:46:31.273 "digest": "sha512", 00:46:31.273 "dhgroup": "ffdhe4096" 00:46:31.273 } 00:46:31.273 } 00:46:31.273 ]' 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:31.273 09:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:31.532 09:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:31.532 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:32.466 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:32.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:32.466 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:32.466 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:32.466 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:32.466 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:32.466 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:32.466 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:46:32.466 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:46:32.724 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:46:32.724 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:32.724 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:32.724 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:46:32.724 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:46:32.724 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:32.724 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:32.724 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:32.724 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:32.724 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:32.724 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:32.724 09:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:32.724 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:32.983 00:46:32.983 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:32.983 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:32.983 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:33.241 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:33.241 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:33.241 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:33.241 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:33.241 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:33.241 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:33.241 { 00:46:33.241 "cntlid": 125, 00:46:33.241 "qid": 0, 00:46:33.241 "state": "enabled", 00:46:33.241 "thread": "nvmf_tgt_poll_group_000", 00:46:33.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:33.241 "listen_address": { 00:46:33.241 "trtype": "TCP", 00:46:33.241 "adrfam": "IPv4", 00:46:33.241 "traddr": "10.0.0.3", 00:46:33.241 "trsvcid": "4420" 00:46:33.241 }, 00:46:33.241 "peer_address": { 00:46:33.241 "trtype": "TCP", 00:46:33.241 "adrfam": "IPv4", 00:46:33.241 "traddr": "10.0.0.1", 00:46:33.241 "trsvcid": "50816" 00:46:33.241 }, 00:46:33.241 "auth": { 00:46:33.241 "state": "completed", 00:46:33.241 "digest": "sha512", 00:46:33.241 "dhgroup": "ffdhe4096" 00:46:33.241 } 00:46:33.241 } 00:46:33.241 ]' 00:46:33.241 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:33.241 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:33.241 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:33.500 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:46:33.500 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:33.500 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:33.500 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:33.500 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:33.758 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:33.758 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:34.697 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:34.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:34.697 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:34.697 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:34.697 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:34.697 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:34.697 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:34.697 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:46:34.697 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:46:34.697 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:46:34.697 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:34.697 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:34.697 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:46:34.697 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:46:34.697 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:34.697 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:46:34.697 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:34.697 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:34.957 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:34.957 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:46:34.957 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:34.957 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:35.215 00:46:35.215 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:35.215 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:35.215 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:35.474 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:35.474 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:35.474 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.474 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:35.474 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:35.474 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:35.474 { 00:46:35.474 "cntlid": 127, 00:46:35.474 "qid": 0, 00:46:35.474 "state": "enabled", 00:46:35.474 "thread": "nvmf_tgt_poll_group_000", 00:46:35.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:35.474 "listen_address": { 00:46:35.474 "trtype": "TCP", 00:46:35.474 "adrfam": "IPv4", 00:46:35.474 "traddr": "10.0.0.3", 00:46:35.474 "trsvcid": "4420" 00:46:35.474 }, 00:46:35.474 "peer_address": { 00:46:35.474 "trtype": "TCP", 00:46:35.474 "adrfam": "IPv4", 00:46:35.474 "traddr": "10.0.0.1", 00:46:35.474 "trsvcid": "50848" 00:46:35.474 }, 00:46:35.474 "auth": { 00:46:35.474 "state": "completed", 00:46:35.474 "digest": "sha512", 00:46:35.474 "dhgroup": "ffdhe4096" 00:46:35.474 } 00:46:35.474 } 00:46:35.474 ]' 00:46:35.474 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:35.474 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:35.474 09:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:35.733 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:46:35.733 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:35.733 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:35.733 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:35.733 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:35.992 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:35.992 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:36.927 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:36.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:36.927 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:36.927 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.927 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:36.927 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.927 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:46:36.927 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:36.927 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:46:36.927 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:37.186 09:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:37.186 09:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:37.754 00:46:37.754 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:37.754 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:37.754 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:38.014 { 00:46:38.014 "cntlid": 129, 00:46:38.014 "qid": 0, 00:46:38.014 "state": "enabled", 00:46:38.014 "thread": "nvmf_tgt_poll_group_000", 00:46:38.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:38.014 "listen_address": { 00:46:38.014 "trtype": "TCP", 00:46:38.014 "adrfam": "IPv4", 00:46:38.014 "traddr": "10.0.0.3", 00:46:38.014 "trsvcid": "4420" 00:46:38.014 }, 00:46:38.014 "peer_address": { 00:46:38.014 "trtype": "TCP", 00:46:38.014 "adrfam": "IPv4", 00:46:38.014 "traddr": "10.0.0.1", 00:46:38.014 "trsvcid": "50870" 00:46:38.014 }, 00:46:38.014 "auth": { 00:46:38.014 "state": "completed", 00:46:38.014 "digest": "sha512", 00:46:38.014 "dhgroup": "ffdhe6144" 00:46:38.014 } 00:46:38.014 } 00:46:38.014 ]' 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:46:38.014 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:38.273 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:38.273 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:38.273 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:38.531 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:38.531 09:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:39.484 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:39.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:39.484 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:39.484 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:39.484 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:39.484 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:39.484 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:39.484 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:46:39.484 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:39.742 09:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:39.742 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:40.308 00:46:40.308 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:40.308 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:40.308 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:40.567 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:40.567 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:40.567 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:40.567 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:40.567 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:40.567 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:40.567 { 00:46:40.567 "cntlid": 131, 00:46:40.567 "qid": 0, 00:46:40.567 "state": "enabled", 00:46:40.567 "thread": "nvmf_tgt_poll_group_000", 00:46:40.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:40.567 "listen_address": { 00:46:40.567 "trtype": "TCP", 00:46:40.567 "adrfam": "IPv4", 00:46:40.567 "traddr": "10.0.0.3", 00:46:40.567 "trsvcid": "4420" 00:46:40.567 }, 00:46:40.567 "peer_address": { 00:46:40.567 "trtype": "TCP", 00:46:40.567 "adrfam": "IPv4", 00:46:40.567 "traddr": "10.0.0.1", 00:46:40.567 "trsvcid": "58384" 00:46:40.567 }, 00:46:40.567 "auth": { 00:46:40.567 "state": "completed", 00:46:40.567 "digest": "sha512", 00:46:40.567 "dhgroup": "ffdhe6144" 00:46:40.567 } 00:46:40.567 } 00:46:40.567 ]' 00:46:40.567 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:40.567 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:40.567 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:40.826 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:46:40.826 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:46:40.826 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:40.826 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:40.826 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:41.084 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:41.084 09:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:42.065 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:42.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:42.065 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:42.065 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.065 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:42.065 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.065 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:42.065 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:46:42.065 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:46:42.326 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:46:42.326 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:42.326 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:42.326 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:46:42.326 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:46:42.326 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:42.326 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:42.327 09:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.327 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:42.327 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.327 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:42.327 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:42.327 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:42.893 00:46:42.893 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:42.893 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:42.893 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:43.152 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:43.152 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:43.152 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:43.152 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:43.152 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:43.152 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:43.152 { 00:46:43.152 "cntlid": 133, 00:46:43.152 "qid": 0, 00:46:43.152 "state": "enabled", 00:46:43.152 "thread": "nvmf_tgt_poll_group_000", 00:46:43.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:43.152 "listen_address": { 00:46:43.152 "trtype": "TCP", 00:46:43.152 "adrfam": "IPv4", 00:46:43.152 "traddr": "10.0.0.3", 00:46:43.152 "trsvcid": "4420" 00:46:43.152 }, 00:46:43.152 "peer_address": { 00:46:43.152 "trtype": "TCP", 00:46:43.152 "adrfam": "IPv4", 00:46:43.152 "traddr": "10.0.0.1", 00:46:43.152 "trsvcid": "58410" 00:46:43.152 }, 00:46:43.152 "auth": { 00:46:43.152 "state": "completed", 00:46:43.152 "digest": "sha512", 00:46:43.152 "dhgroup": "ffdhe6144" 00:46:43.153 } 00:46:43.153 } 00:46:43.153 ]' 00:46:43.153 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:43.153 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:43.153 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:43.153 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:46:43.153 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:43.153 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:43.153 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:43.153 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:43.719 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:43.719 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:44.654 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:44.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:44.654 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:44.654 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:44.654 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:44.654 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:44.654 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:44.654 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:46:44.654 09:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:44.911 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:45.477 00:46:45.477 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:45.477 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:45.477 09:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:45.735 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:45.735 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:45.735 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.735 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:45.735 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.994 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:45.994 { 00:46:45.994 "cntlid": 135, 00:46:45.994 "qid": 0, 00:46:45.994 "state": "enabled", 00:46:45.994 "thread": "nvmf_tgt_poll_group_000", 00:46:45.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:45.994 "listen_address": { 00:46:45.994 "trtype": "TCP", 00:46:45.994 "adrfam": "IPv4", 00:46:45.994 "traddr": "10.0.0.3", 00:46:45.994 "trsvcid": "4420" 00:46:45.994 }, 00:46:45.994 "peer_address": { 00:46:45.994 "trtype": "TCP", 00:46:45.994 "adrfam": "IPv4", 00:46:45.994 "traddr": "10.0.0.1", 00:46:45.994 "trsvcid": "58442" 00:46:45.994 }, 00:46:45.994 "auth": { 00:46:45.994 "state": "completed", 00:46:45.994 "digest": "sha512", 00:46:45.994 "dhgroup": "ffdhe6144" 00:46:45.994 } 00:46:45.994 } 00:46:45.994 ]' 00:46:45.994 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:45.994 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:45.994 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:45.994 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:46:45.994 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:45.994 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:45.994 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:45.994 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:46.573 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:46.573 09:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:47.140 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:47.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:47.140 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:47.140 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:47.140 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:47.398 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:47.398 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:46:47.398 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:47.398 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:46:47.398 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:47.657 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:48.592 00:46:48.592 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:48.592 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:48.592 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:48.863 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:48.863 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:48.863 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:48.863 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:48.863 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:48.863 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:48.863 { 00:46:48.863 "cntlid": 137, 00:46:48.863 "qid": 0, 00:46:48.863 "state": "enabled", 00:46:48.863 "thread": "nvmf_tgt_poll_group_000", 00:46:48.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:48.863 "listen_address": { 00:46:48.863 "trtype": "TCP", 00:46:48.863 "adrfam": "IPv4", 00:46:48.863 "traddr": "10.0.0.3", 00:46:48.863 "trsvcid": "4420" 00:46:48.863 }, 00:46:48.863 "peer_address": { 00:46:48.863 "trtype": "TCP", 00:46:48.863 "adrfam": "IPv4", 00:46:48.863 "traddr": "10.0.0.1", 00:46:48.863 "trsvcid": "58474" 00:46:48.863 }, 00:46:48.863 "auth": { 00:46:48.863 "state": "completed", 00:46:48.863 "digest": "sha512", 00:46:48.863 "dhgroup": "ffdhe8192" 00:46:48.863 } 00:46:48.863 } 00:46:48.863 ]' 00:46:48.863 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:48.863 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:48.863 09:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:48.863 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:46:48.863 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:49.121 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:49.121 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:49.121 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:49.380 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:49.380 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:50.315 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:50.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:50.315 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:50.315 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:50.315 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:50.315 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:50.315 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:50.315 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:46:50.315 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:46:50.315 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:46:50.574 09:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:50.574 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:46:51.140 00:46:51.140 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:51.140 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:51.140 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:51.398 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:51.398 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:51.398 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:51.398 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:51.398 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:51.398 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:51.398 { 00:46:51.398 "cntlid": 139, 00:46:51.398 "qid": 0, 00:46:51.398 "state": "enabled", 00:46:51.398 "thread": "nvmf_tgt_poll_group_000", 00:46:51.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:51.398 "listen_address": { 00:46:51.398 "trtype": "TCP", 00:46:51.398 "adrfam": "IPv4", 00:46:51.398 "traddr": "10.0.0.3", 00:46:51.398 "trsvcid": "4420" 00:46:51.398 }, 00:46:51.398 "peer_address": { 00:46:51.398 "trtype": "TCP", 00:46:51.398 "adrfam": "IPv4", 00:46:51.398 "traddr": "10.0.0.1", 00:46:51.398 "trsvcid": "39220" 00:46:51.398 }, 00:46:51.398 "auth": { 00:46:51.398 "state": "completed", 00:46:51.398 "digest": "sha512", 00:46:51.398 "dhgroup": "ffdhe8192" 00:46:51.398 } 00:46:51.398 } 00:46:51.398 ]' 00:46:51.398 09:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:51.657 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:51.657 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:51.657 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:46:51.657 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:51.657 09:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:51.657 09:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:51.657 09:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:51.917 09:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:51.917 09:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: --dhchap-ctrl-secret DHHC-1:02:MDU5OWMxNDM3OGFlMmYyYTg4MjM4MzUyYmRhNjJhOGE4ZDUyOTA2OWE1MDNkMDI1Mjy+qA==: 00:46:52.853 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:52.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:52.853 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:52.853 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:52.853 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:52.853 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:52.853 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:52.853 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:46:52.853 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:53.112 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:46:53.679 00:46:53.937 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:53.937 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:53.937 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:54.195 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:54.195 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:54.195 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:54.195 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:54.195 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:54.195 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:54.195 { 00:46:54.195 "cntlid": 141, 00:46:54.195 "qid": 0, 00:46:54.195 "state": "enabled", 00:46:54.195 "thread": "nvmf_tgt_poll_group_000", 00:46:54.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:54.196 "listen_address": { 00:46:54.196 "trtype": "TCP", 00:46:54.196 "adrfam": "IPv4", 00:46:54.196 "traddr": "10.0.0.3", 00:46:54.196 "trsvcid": "4420" 00:46:54.196 }, 00:46:54.196 "peer_address": { 00:46:54.196 "trtype": "TCP", 00:46:54.196 "adrfam": "IPv4", 00:46:54.196 "traddr": "10.0.0.1", 00:46:54.196 "trsvcid": "39254" 00:46:54.196 }, 00:46:54.196 "auth": { 00:46:54.196 "state": "completed", 00:46:54.196 "digest": 
"sha512", 00:46:54.196 "dhgroup": "ffdhe8192" 00:46:54.196 } 00:46:54.196 } 00:46:54.196 ]' 00:46:54.196 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:54.196 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:54.196 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:54.196 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:46:54.196 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:54.196 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:54.196 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:54.196 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:54.763 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:54.763 09:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:01:Y2RlZTI4ZThhNTRmNDYyYzlmYTI5OWE5MTliZDRiODkKLTBe: 00:46:55.374 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:55.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:55.374 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:55.374 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.374 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:55.374 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.374 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:46:55.374 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:46:55.374 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:46:55.632 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:46:55.633 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:55.633 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:46:55.633 09:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:46:55.633 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:46:55.633 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:55.633 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:46:55.633 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:55.633 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:55.633 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:55.633 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:46:55.633 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:55.633 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:46:56.218 00:46:56.485 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:56.485 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:56.485 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:56.745 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:56.745 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:56.745 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:56.745 09:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:56.745 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:56.745 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:56.745 { 00:46:56.745 "cntlid": 143, 00:46:56.745 "qid": 0, 00:46:56.745 "state": "enabled", 00:46:56.745 "thread": "nvmf_tgt_poll_group_000", 00:46:56.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:56.745 "listen_address": { 00:46:56.745 "trtype": "TCP", 00:46:56.745 "adrfam": "IPv4", 00:46:56.745 "traddr": "10.0.0.3", 00:46:56.745 "trsvcid": "4420" 00:46:56.745 }, 00:46:56.745 "peer_address": { 00:46:56.745 "trtype": "TCP", 00:46:56.745 "adrfam": "IPv4", 00:46:56.745 "traddr": "10.0.0.1", 00:46:56.745 "trsvcid": "39290" 00:46:56.745 }, 00:46:56.745 "auth": { 00:46:56.745 "state": "completed", 00:46:56.745 
"digest": "sha512", 00:46:56.745 "dhgroup": "ffdhe8192" 00:46:56.745 } 00:46:56.745 } 00:46:56.745 ]' 00:46:56.745 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:56.745 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:56.745 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:56.745 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:46:56.745 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:56.745 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:56.745 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:56.745 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:57.003 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:57.003 09:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:46:57.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:46:57.938 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:58.196 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:46:59.132 00:46:59.132 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:46:59.132 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:46:59.132 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:46:59.132 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:59.132 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:46:59.132 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:59.132 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:46:59.132 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:59.132 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:46:59.132 { 00:46:59.132 "cntlid": 145, 00:46:59.132 "qid": 0, 00:46:59.132 "state": "enabled", 00:46:59.132 "thread": "nvmf_tgt_poll_group_000", 00:46:59.132 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:46:59.132 "listen_address": { 00:46:59.132 "trtype": "TCP", 00:46:59.132 "adrfam": "IPv4", 00:46:59.132 "traddr": "10.0.0.3", 00:46:59.132 "trsvcid": "4420" 00:46:59.132 }, 00:46:59.132 "peer_address": { 00:46:59.132 "trtype": "TCP", 00:46:59.132 "adrfam": "IPv4", 00:46:59.132 "traddr": "10.0.0.1", 00:46:59.133 "trsvcid": "39314" 00:46:59.133 }, 00:46:59.133 "auth": { 00:46:59.133 "state": "completed", 00:46:59.133 "digest": "sha512", 00:46:59.133 "dhgroup": "ffdhe8192" 00:46:59.133 } 00:46:59.133 } 00:46:59.133 ]' 00:46:59.133 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:46:59.133 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:46:59.133 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:46:59.391 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:46:59.391 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:46:59.392 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:46:59.392 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:46:59.392 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:46:59.651 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:46:59.651 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:00:ZjNlOTQ3ZDhkNTc0MTBlYTBjY2RhODE3YTBlZTcxODgxMWY1OGMyZGQxYmZjYTg2YBezjQ==: --dhchap-ctrl-secret DHHC-1:03:NTUxY2Q0ZjcyZmExNzdmOGQzZTUxNDE2M2IxMjlmMjEyMzVkOWU4MjVlNjI2YWYxOTVkYjUxOGNiZjM4MjBmN1swfos=: 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:47:00.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 00:47:00.589 09:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:47:00.589 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:47:01.158 request: 00:47:01.158 { 00:47:01.158 "name": "nvme0", 00:47:01.158 "trtype": "tcp", 00:47:01.158 "traddr": "10.0.0.3", 00:47:01.158 "adrfam": "ipv4", 00:47:01.158 "trsvcid": "4420", 00:47:01.158 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:47:01.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:47:01.158 "prchk_reftag": false, 00:47:01.158 "prchk_guard": false, 00:47:01.158 "hdgst": false, 00:47:01.158 "ddgst": false, 00:47:01.158 "dhchap_key": "key2", 00:47:01.158 "allow_unrecognized_csi": false, 00:47:01.158 "method": "bdev_nvme_attach_controller", 00:47:01.158 "req_id": 1 00:47:01.158 } 00:47:01.158 Got JSON-RPC error response 00:47:01.158 response: 00:47:01.158 { 00:47:01.158 "code": -5, 00:47:01.158 "message": "Input/output error" 00:47:01.158 } 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:01.158 
09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:47:01.158 09:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:47:01.772 request: 00:47:01.772 { 00:47:01.772 "name": "nvme0", 00:47:01.772 "trtype": "tcp", 00:47:01.772 "traddr": "10.0.0.3", 00:47:01.772 "adrfam": "ipv4", 00:47:01.772 "trsvcid": "4420", 00:47:01.772 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:47:01.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:47:01.772 "prchk_reftag": false, 00:47:01.772 "prchk_guard": false, 00:47:01.772 "hdgst": false, 00:47:01.772 "ddgst": false, 00:47:01.772 "dhchap_key": "key1", 00:47:01.772 "dhchap_ctrlr_key": "ckey2", 00:47:01.772 "allow_unrecognized_csi": false, 00:47:01.772 "method": "bdev_nvme_attach_controller", 00:47:01.772 "req_id": 1 00:47:01.772 } 00:47:01.772 Got JSON-RPC error response 00:47:01.772 response: 00:47:01.772 { 
00:47:01.772 "code": -5, 00:47:01.772 "message": "Input/output error" 00:47:01.772 } 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:01.772 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:02.340 
request: 00:47:02.340 { 00:47:02.340 "name": "nvme0", 00:47:02.340 "trtype": "tcp", 00:47:02.340 "traddr": "10.0.0.3", 00:47:02.340 "adrfam": "ipv4", 00:47:02.340 "trsvcid": "4420", 00:47:02.340 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:47:02.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:47:02.340 "prchk_reftag": false, 00:47:02.340 "prchk_guard": false, 00:47:02.340 "hdgst": false, 00:47:02.340 "ddgst": false, 00:47:02.340 "dhchap_key": "key1", 00:47:02.340 "dhchap_ctrlr_key": "ckey1", 00:47:02.340 "allow_unrecognized_csi": false, 00:47:02.340 "method": "bdev_nvme_attach_controller", 00:47:02.340 "req_id": 1 00:47:02.340 } 00:47:02.340 Got JSON-RPC error response 00:47:02.340 response: 00:47:02.340 { 00:47:02.340 "code": -5, 00:47:02.340 "message": "Input/output error" 00:47:02.340 } 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67109 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67109 ']' 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67109 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67109 00:47:02.340 killing process with pid 67109 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67109' 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67109 00:47:02.340 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67109 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:02.599 09:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70362 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70362 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70362 ']' 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:02.599 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:02.599 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:03.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70362 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70362 ']' 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:03.163 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.420 null0 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Q6x 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.sXb ]] 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sXb 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Q8s 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.420 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.gV0 ]] 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gV0 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:47:03.678 09:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.p06 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.4Az ]] 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4Az 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Nip 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:47:03.678 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
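After the target is restarted with --wait-for-rpc -L nvmf_auth, the secrets are registered as keyring file keys instead of being passed inline. A condensed sketch of that portion, using the process arguments and key file paths shown in the trace; the harness performs additional bring-up steps (network namespace setup, listener configuration) that are not repeated here:
# Start the target with auth-level logging inside the harness's network namespace, waiting for RPC
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
# Register the DH-HMAC-CHAP secrets as keyring file keys (paths as generated earlier in the run)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Q6x
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sXb
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.Q8s
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gV0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha384.p06
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4Az
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.Nip
Once the keys are in the keyring, the subsequent nvmf_subsystem_add_host and bdev_nvme_attach_controller calls in the trace refer to them by name (--dhchap-key key3), as in the first half of the run.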
00:47:03.679 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:47:05.062 nvme0n1 00:47:05.062 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:47:05.062 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:05.062 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:47:05.062 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:05.062 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:47:05.062 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:05.062 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:05.062 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:05.062 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:47:05.062 { 00:47:05.062 "cntlid": 1, 00:47:05.062 "qid": 0, 00:47:05.062 "state": "enabled", 00:47:05.062 "thread": "nvmf_tgt_poll_group_000", 00:47:05.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:47:05.062 "listen_address": { 00:47:05.062 "trtype": "TCP", 00:47:05.062 "adrfam": "IPv4", 00:47:05.062 "traddr": "10.0.0.3", 00:47:05.062 "trsvcid": "4420" 00:47:05.062 }, 00:47:05.062 "peer_address": { 00:47:05.062 "trtype": "TCP", 00:47:05.062 "adrfam": "IPv4", 00:47:05.062 "traddr": "10.0.0.1", 00:47:05.062 "trsvcid": "35134" 00:47:05.062 }, 00:47:05.062 "auth": { 00:47:05.062 "state": "completed", 00:47:05.062 "digest": "sha512", 00:47:05.062 "dhgroup": "ffdhe8192" 00:47:05.062 } 00:47:05.062 } 00:47:05.062 ]' 00:47:05.062 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:47:05.320 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:47:05.320 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:47:05.320 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:47:05.320 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:47:05.320 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:47:05.320 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:47:05.320 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:47:05.578 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:47:05.578 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:47:06.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key3 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:47:06.514 09:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:47:06.796 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:47:06.796 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:47:06.796 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:47:06.796 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:47:06.796 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:06.796 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:47:06.796 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:06.796 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:47:06.796 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:47:06.796 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:47:07.066 request: 00:47:07.066 { 00:47:07.066 "name": "nvme0", 00:47:07.066 "trtype": "tcp", 00:47:07.066 "traddr": "10.0.0.3", 00:47:07.066 "adrfam": "ipv4", 00:47:07.066 "trsvcid": "4420", 00:47:07.066 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:47:07.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:47:07.066 "prchk_reftag": false, 00:47:07.066 "prchk_guard": false, 00:47:07.066 "hdgst": false, 00:47:07.066 "ddgst": false, 00:47:07.066 "dhchap_key": "key3", 00:47:07.066 "allow_unrecognized_csi": false, 00:47:07.066 "method": "bdev_nvme_attach_controller", 00:47:07.066 "req_id": 1 00:47:07.066 } 00:47:07.066 Got JSON-RPC error response 00:47:07.066 response: 00:47:07.066 { 00:47:07.066 "code": -5, 00:47:07.066 "message": "Input/output error" 00:47:07.066 } 00:47:07.066 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:47:07.066 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:07.066 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:07.066 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:07.066 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:47:07.066 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:47:07.066 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:47:07.066 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:47:07.327 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:47:07.327 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:47:07.327 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:47:07.327 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:47:07.327 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:07.327 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:47:07.327 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:07.327 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:47:07.327 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:47:07.327 09:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:47:07.586 request: 00:47:07.586 { 00:47:07.586 "name": "nvme0", 00:47:07.586 "trtype": "tcp", 00:47:07.586 "traddr": "10.0.0.3", 00:47:07.586 "adrfam": "ipv4", 00:47:07.586 "trsvcid": "4420", 00:47:07.586 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:47:07.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:47:07.586 "prchk_reftag": false, 00:47:07.586 "prchk_guard": false, 00:47:07.586 "hdgst": false, 00:47:07.586 "ddgst": false, 00:47:07.586 "dhchap_key": "key3", 00:47:07.586 "allow_unrecognized_csi": false, 00:47:07.586 "method": "bdev_nvme_attach_controller", 00:47:07.586 "req_id": 1 00:47:07.586 } 00:47:07.586 Got JSON-RPC error response 00:47:07.586 response: 00:47:07.586 { 00:47:07.586 "code": -5, 00:47:07.586 "message": "Input/output error" 00:47:07.586 } 00:47:07.586 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:47:07.586 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:07.586 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:07.586 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:07.586 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:47:07.586 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:47:07.586 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:47:07.586 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:47:07.586 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:47:07.586 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:47:07.850 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:47:08.417 request: 00:47:08.417 { 00:47:08.417 "name": "nvme0", 00:47:08.417 "trtype": "tcp", 00:47:08.417 "traddr": "10.0.0.3", 00:47:08.417 "adrfam": "ipv4", 00:47:08.417 "trsvcid": "4420", 00:47:08.417 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:47:08.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:47:08.417 "prchk_reftag": false, 00:47:08.417 "prchk_guard": false, 00:47:08.417 "hdgst": false, 00:47:08.417 "ddgst": false, 00:47:08.417 "dhchap_key": "key0", 00:47:08.417 "dhchap_ctrlr_key": "key1", 00:47:08.417 "allow_unrecognized_csi": false, 00:47:08.417 "method": "bdev_nvme_attach_controller", 00:47:08.417 "req_id": 1 00:47:08.417 } 00:47:08.417 Got JSON-RPC error response 00:47:08.417 response: 00:47:08.417 { 00:47:08.417 "code": -5, 00:47:08.417 "message": "Input/output error" 00:47:08.417 } 00:47:08.417 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:47:08.417 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:08.417 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:08.417 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:47:08.417 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:47:08.417 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:47:08.417 09:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:47:08.675 nvme0n1 00:47:08.932 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:47:08.933 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:08.933 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:47:09.190 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:09.190 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:47:09.190 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:47:09.448 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 00:47:09.448 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:09.448 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:09.448 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:09.448 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:47:09.448 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:47:09.448 09:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:47:10.383 nvme0n1 00:47:10.383 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:47:10.383 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:10.383 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:47:10.642 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:10.642 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key key3 00:47:10.642 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:10.642 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:10.642 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:10.642 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:47:10.642 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:47:10.642 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:10.900 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:10.900 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:47:10.900 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid eacce601-ab17-4447-87df-663b0f4f5455 -l 0 --dhchap-secret DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: --dhchap-ctrl-secret DHHC-1:03:NGRkZDhmNjk0NGQ2OTg1YmZiYzAzMTlhNmUwNDYyMDQ4MGIxZjY2MTI1Nzc1YTZkZWJhYzI5NGI5M2M0NjE3Nnu2I0o=: 00:47:11.835 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:47:11.835 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:47:11.835 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:47:11.835 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:47:11.835 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:47:11.835 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:47:11.835 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:47:11.835 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:47:11.835 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:47:12.094 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:47:12.094 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:47:12.094 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:47:12.094 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:47:12.094 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:12.094 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:47:12.094 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:12.094 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:47:12.094 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:47:12.094 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:47:12.660 request: 00:47:12.660 { 00:47:12.660 "name": "nvme0", 00:47:12.660 "trtype": "tcp", 00:47:12.660 "traddr": "10.0.0.3", 00:47:12.660 "adrfam": "ipv4", 00:47:12.660 "trsvcid": "4420", 00:47:12.660 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:47:12.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455", 00:47:12.660 "prchk_reftag": false, 00:47:12.660 "prchk_guard": false, 00:47:12.660 "hdgst": false, 00:47:12.660 "ddgst": false, 00:47:12.660 "dhchap_key": "key1", 00:47:12.660 "allow_unrecognized_csi": false, 00:47:12.660 "method": "bdev_nvme_attach_controller", 00:47:12.660 "req_id": 1 00:47:12.660 } 00:47:12.660 Got JSON-RPC error response 00:47:12.660 response: 00:47:12.660 { 00:47:12.660 "code": -5, 00:47:12.660 "message": "Input/output error" 00:47:12.660 } 00:47:12.660 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:47:12.660 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:12.660 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:12.660 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:12.660 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:47:12.660 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:47:12.661 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:47:14.036 nvme0n1 00:47:14.036 
09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:47:14.036 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:14.036 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:47:14.296 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:14.296 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:47:14.296 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:47:14.556 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:14.556 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:14.556 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:14.556 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:14.556 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:47:14.556 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:47:14.556 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:47:14.815 nvme0n1 00:47:14.815 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:47:14.815 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:47:14.815 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:15.382 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:15.382 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:47:15.382 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key key3 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:15.641 09:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: '' 2s 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: ]] 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTM0ZDQxZDgxOGMzYWVmMzcyN2EwMTYyY2Q0YWI2N2YXMJng: 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:47:15.641 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key1 --dhchap-ctrlr-key key2 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: 2s 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:47:17.544 09:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: ]] 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YmFiN2QxZWUxM2Y0NGU5ODdiMGEwOGQxNDQxMGY2MDQ5MTdiNDZlMTU3MzRjZTg5fGSNbA==: 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:47:17.544 09:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:47:20.075 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:47:20.075 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:47:20.075 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:47:20.075 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:47:20.075 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:47:20.075 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:47:20.075 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:47:20.075 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:47:20.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:47:20.075 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key key1 00:47:20.075 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:20.075 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:20.075 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:20.075 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:47:20.075 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:47:20.075 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:47:20.689 nvme0n1 00:47:20.689 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key key3 00:47:20.689 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:20.689 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:20.689 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:20.689 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:47:20.689 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:47:21.256 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:47:21.256 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:21.256 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:47:21.823 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:21.823 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:21.823 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:21.823 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:21.823 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:21.823 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:47:21.823 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:47:22.081 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:47:22.081 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:22.082 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key key3 00:47:22.340 09:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:47:22.340 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:47:22.906 request: 00:47:22.906 { 00:47:22.906 "name": "nvme0", 00:47:22.906 "dhchap_key": "key1", 00:47:22.906 "dhchap_ctrlr_key": "key3", 00:47:22.906 "method": "bdev_nvme_set_keys", 00:47:22.906 "req_id": 1 00:47:22.906 } 00:47:22.906 Got JSON-RPC error response 00:47:22.906 response: 00:47:22.906 { 00:47:22.906 "code": -13, 00:47:22.906 "message": "Permission denied" 00:47:22.906 } 00:47:22.906 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:47:22.906 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:22.906 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:22.906 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:22.906 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:47:22.906 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:47:22.906 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:23.474 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:47:23.474 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:47:24.410 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:47:24.410 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:47:24.410 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:24.668 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:47:24.669 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key0 --dhchap-ctrlr-key key1 00:47:24.669 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:24.669 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:24.669 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:24.669 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:47:24.669 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:47:24.669 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:47:25.605 nvme0n1 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --dhchap-key key2 --dhchap-ctrlr-key key3 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:47:25.605 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:47:26.555 request: 00:47:26.555 { 00:47:26.555 "name": "nvme0", 00:47:26.555 "dhchap_key": "key2", 00:47:26.555 "dhchap_ctrlr_key": "key0", 00:47:26.555 "method": "bdev_nvme_set_keys", 00:47:26.555 "req_id": 1 00:47:26.555 } 00:47:26.555 Got JSON-RPC error response 00:47:26.555 response: 00:47:26.555 { 00:47:26.555 "code": -13, 00:47:26.555 "message": "Permission denied" 00:47:26.555 } 00:47:26.555 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:47:26.555 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:26.555 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:26.555 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:26.555 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:47:26.555 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:26.555 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:47:26.813 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:47:26.813 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:47:27.749 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:47:27.749 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:47:27.749 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67128 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67128 ']' 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67128 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67128 00:47:28.008 killing process with pid 67128 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:28.008 09:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67128' 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67128 00:47:28.008 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67128 00:47:28.266 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:47:28.266 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:28.266 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:47:28.266 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:28.266 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:47:28.266 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:28.266 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:28.266 rmmod nvme_tcp 00:47:28.525 rmmod nvme_fabrics 00:47:28.525 rmmod nvme_keyring 00:47:28.525 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:28.525 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:47:28.525 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:47:28.525 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70362 ']' 00:47:28.525 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70362 00:47:28.526 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70362 ']' 00:47:28.526 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70362 00:47:28.526 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:47:28.526 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:28.526 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70362 00:47:28.526 killing process with pid 70362 00:47:28.526 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:28.526 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:28.526 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70362' 00:47:28.526 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70362 00:47:28.526 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70362 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:47:28.785 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Q6x /tmp/spdk.key-sha256.Q8s /tmp/spdk.key-sha384.p06 /tmp/spdk.key-sha512.Nip /tmp/spdk.key-sha512.sXb /tmp/spdk.key-sha384.gV0 /tmp/spdk.key-sha256.4Az '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:47:29.044 ************************************ 00:47:29.044 END TEST nvmf_auth_target 00:47:29.044 ************************************ 00:47:29.044 00:47:29.044 real 3m29.575s 00:47:29.044 user 8m24.340s 00:47:29.044 sys 0m30.340s 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:29.044 ************************************ 00:47:29.044 START TEST nvmf_bdevio_no_huge 00:47:29.044 ************************************ 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:47:29.044 * Looking for test storage... 00:47:29.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:29.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.044 --rc genhtml_branch_coverage=1 00:47:29.044 --rc genhtml_function_coverage=1 00:47:29.044 --rc genhtml_legend=1 00:47:29.044 --rc geninfo_all_blocks=1 00:47:29.044 --rc geninfo_unexecuted_blocks=1 00:47:29.044 00:47:29.044 ' 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:29.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.044 --rc genhtml_branch_coverage=1 00:47:29.044 --rc genhtml_function_coverage=1 00:47:29.044 --rc genhtml_legend=1 00:47:29.044 --rc geninfo_all_blocks=1 00:47:29.044 --rc geninfo_unexecuted_blocks=1 00:47:29.044 00:47:29.044 ' 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:29.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.044 --rc genhtml_branch_coverage=1 00:47:29.044 --rc genhtml_function_coverage=1 00:47:29.044 --rc genhtml_legend=1 00:47:29.044 --rc geninfo_all_blocks=1 00:47:29.044 --rc geninfo_unexecuted_blocks=1 00:47:29.044 00:47:29.044 ' 00:47:29.044 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:29.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.044 --rc genhtml_branch_coverage=1 00:47:29.044 --rc genhtml_function_coverage=1 00:47:29.044 --rc genhtml_legend=1 00:47:29.044 --rc geninfo_all_blocks=1 00:47:29.044 --rc geninfo_unexecuted_blocks=1 00:47:29.044 00:47:29.045 ' 00:47:29.045 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:29.045 
09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:47:29.045 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:29.045 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:29.045 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:29.045 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:29.045 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:29.045 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:29.045 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:29.045 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:29.045 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.303 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:29.304 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:29.304 
09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:29.304 Cannot find device "nvmf_init_br" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:29.304 Cannot find device "nvmf_init_br2" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:29.304 Cannot find device "nvmf_tgt_br" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:29.304 Cannot find device "nvmf_tgt_br2" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:29.304 Cannot find device "nvmf_init_br" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:29.304 Cannot find device "nvmf_init_br2" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:29.304 Cannot find device "nvmf_tgt_br" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:29.304 Cannot find device "nvmf_tgt_br2" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:29.304 Cannot find device "nvmf_br" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:29.304 Cannot find device "nvmf_init_if" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:29.304 Cannot find device "nvmf_init_if2" 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:47:29.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:29.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:29.304 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:29.563 09:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:29.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:29.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:47:29.563 00:47:29.563 --- 10.0.0.3 ping statistics --- 00:47:29.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:29.563 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:29.563 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:29.563 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:47:29.563 00:47:29.563 --- 10.0.0.4 ping statistics --- 00:47:29.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:29.563 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:29.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:29.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:47:29.563 00:47:29.563 --- 10.0.0.1 ping statistics --- 00:47:29.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:29.563 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:29.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
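The nvmf_veth_init sequence traced above assembles the self-contained test network that these pings then verify: veth pairs whose nvmf_tgt_if/nvmf_tgt_if2 ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), initiator ends nvmf_init_if/nvmf_init_if2 left in the root namespace (10.0.0.1 and 10.0.0.2), and a bridge nvmf_br joining all of the *_br peer ends. The iptables rules carry an SPDK_NVMF comment tag so the later teardown (iptables-save | grep -v SPDK_NVMF | iptables-restore, visible further down) strips exactly these rules and nothing else. A condensed sketch of one initiator/target pair, assuming root privileges and the same interface names as above; the log builds a second pair of each the same way:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + its bridge port
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end + its bridge port
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # both peer ends join the bridge
  ip link set nvmf_tgt_br master nvmf_br
  # Admit NVMe/TCP (port 4420) and bridge-local forwarding; the comment tag is what
  # the teardown later greps out of the iptables-save output.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  ping -c 1 10.0.0.3                                             # root namespace -> namespaced target address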
00:47:29.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:47:29.563 00:47:29.563 --- 10.0.0.2 ping statistics --- 00:47:29.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:29.563 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71014 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71014 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71014 ']' 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:29.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:29.563 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:47:29.563 [2024-12-09 09:59:37.023722] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
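nvmfappstart then launches the target inside that namespace without hugepages: the command above (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78) gives DPDK 1024 MB of ordinary memory, which is why the EAL parameter line that follows carries --no-huge --legacy-mem, and core mask 0x78 is why reactors come up on cores 3-6. waitforlisten blocks until the application is listening on /var/tmp/spdk.sock. A simplified sketch of that start-and-wait step, assuming the default RPC socket path (the real waitforlisten in autotest_common.sh performs additional checks):

  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)             # as set in nvmf/common.sh@156 above
  "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!
  # Poll for the RPC Unix socket instead of sleeping a fixed amount.
  for _ in $(seq 1 100); do
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 0.1
  done
  kill -0 "$nvmfpid"                                              # still alive => startup did not abort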
00:47:29.563 [2024-12-09 09:59:37.023822] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:47:29.821 [2024-12-09 09:59:37.189555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:29.821 [2024-12-09 09:59:37.300901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:29.821 [2024-12-09 09:59:37.301003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:29.821 [2024-12-09 09:59:37.301029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:29.821 [2024-12-09 09:59:37.301045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:29.821 [2024-12-09 09:59:37.301058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:29.821 [2024-12-09 09:59:37.302086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:47:29.821 [2024-12-09 09:59:37.302212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:47:29.821 [2024-12-09 09:59:37.302307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:47:29.821 [2024-12-09 09:59:37.302317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:29.822 [2024-12-09 09:59:37.310038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:47:30.080 [2024-12-09 09:59:37.497251] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:47:30.080 Malloc0 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:30.080 09:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:47:30.080 [2024-12-09 09:59:37.537696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:47:30.080 { 00:47:30.080 "params": { 00:47:30.080 "name": "Nvme$subsystem", 00:47:30.080 "trtype": "$TEST_TRANSPORT", 00:47:30.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:30.080 "adrfam": "ipv4", 00:47:30.080 "trsvcid": "$NVMF_PORT", 00:47:30.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:30.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:30.080 "hdgst": ${hdgst:-false}, 00:47:30.080 "ddgst": ${ddgst:-false} 00:47:30.080 }, 00:47:30.080 "method": "bdev_nvme_attach_controller" 00:47:30.080 } 00:47:30.080 EOF 00:47:30.080 )") 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
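gen_nvmf_target_json, traced above, expands one copy of that heredoc per requested subsystem (only subsystem 1 here), filling in the environment prepared earlier ($NVMF_FIRST_TARGET_IP=10.0.0.3, $NVMF_PORT=4420, $TEST_TRANSPORT=tcp), joins the entries with commas and pretty-prints them through jq; the result, printed just below, is what bdevio reads from /dev/fd/62, most likely handed over via process substitution on the bdevio command line. A reduced sketch of that pattern with a hypothetical helper name, not the helper itself:

  gen_target_json_sketch() {
      local config=() s
      for s in "${@:-1}"; do
          # One bdev_nvme_attach_controller entry per subsystem, mirroring the heredoc template above.
          config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' \
              "$s" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$s" "$s")")
      done
      local IFS=,
      printf '%s\n' "${config[*]}" | jq .                          # joined entries, pretty-printed
  }
  # e.g.:  bdevio --json <(gen_target_json_sketch) --no-huge -s 1024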
00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:47:30.080 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:47:30.080 "params": { 00:47:30.080 "name": "Nvme1", 00:47:30.080 "trtype": "tcp", 00:47:30.080 "traddr": "10.0.0.3", 00:47:30.080 "adrfam": "ipv4", 00:47:30.080 "trsvcid": "4420", 00:47:30.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:47:30.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:47:30.080 "hdgst": false, 00:47:30.080 "ddgst": false 00:47:30.080 }, 00:47:30.080 "method": "bdev_nvme_attach_controller" 00:47:30.080 }' 00:47:30.339 [2024-12-09 09:59:37.597278] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:47:30.339 [2024-12-09 09:59:37.597370] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71044 ] 00:47:30.339 [2024-12-09 09:59:37.757203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:30.339 [2024-12-09 09:59:37.829712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:30.339 [2024-12-09 09:59:37.829828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:30.339 [2024-12-09 09:59:37.829834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:30.597 [2024-12-09 09:59:37.844690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:30.597 I/O targets: 00:47:30.597 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:47:30.597 00:47:30.597 00:47:30.597 CUnit - A unit testing framework for C - Version 2.1-3 00:47:30.597 http://cunit.sourceforge.net/ 00:47:30.597 00:47:30.597 00:47:30.597 Suite: bdevio tests on: Nvme1n1 00:47:30.597 Test: blockdev write read block ...passed 00:47:30.597 Test: blockdev write zeroes read block ...passed 00:47:30.597 Test: blockdev write zeroes read no split ...passed 00:47:30.598 Test: blockdev write zeroes read split ...passed 00:47:30.598 Test: blockdev write zeroes read split partial ...passed 00:47:30.598 Test: blockdev reset ...[2024-12-09 09:59:38.081499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:47:30.598 [2024-12-09 09:59:38.081616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e7e90 (9): Bad file descriptor 00:47:30.857 passed 00:47:30.857 Test: blockdev write read 8 blocks ...passed 00:47:30.857 Test: blockdev write read size > 128k ...passed 00:47:30.857 Test: blockdev write read invalid size ...passed 00:47:30.857 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:47:30.857 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:47:30.857 Test: blockdev write read max offset ...passed 00:47:30.857 Test: blockdev write read 2 blocks on overlapped address offset ...[2024-12-09 09:59:38.100108] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:47:30.857 passed 00:47:30.857 Test: blockdev writev readv 8 blocks ...passed 00:47:30.857 Test: blockdev writev readv 30 x 1block ...passed 00:47:30.857 Test: blockdev writev readv block ...passed 00:47:30.857 Test: blockdev writev readv size > 128k ...passed 00:47:30.857 Test: blockdev writev readv size > 128k in two iovs ...passed 00:47:30.857 Test: blockdev comparev and writev ...[2024-12-09 09:59:38.108209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:47:30.857 [2024-12-09 09:59:38.108257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:47:30.857 [2024-12-09 09:59:38.108283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:47:30.857 [2024-12-09 09:59:38.108296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:47:30.857 [2024-12-09 09:59:38.108695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:47:30.857 [2024-12-09 09:59:38.108729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:47:30.857 [2024-12-09 09:59:38.108751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:47:30.857 [2024-12-09 09:59:38.108763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:47:30.857 [2024-12-09 09:59:38.109142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:47:30.857 [2024-12-09 09:59:38.109175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:47:30.857 [2024-12-09 09:59:38.109197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:47:30.857 [2024-12-09 09:59:38.109210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:47:30.857 [2024-12-09 09:59:38.109618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:47:30.857 [2024-12-09 09:59:38.109651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:47:30.857 [2024-12-09 09:59:38.109673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:47:30.857 [2024-12-09 09:59:38.109685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:47:30.857 passed 00:47:30.857 Test: blockdev nvme passthru rw ...passed 00:47:30.857 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:59:38.110750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:47:30.857 [2024-12-09 09:59:38.110782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c 
p:0 m:0 dnr:0 00:47:30.857 [2024-12-09 09:59:38.110928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:47:30.857 [2024-12-09 09:59:38.110947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:47:30.857 [2024-12-09 09:59:38.111095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:47:30.857 [2024-12-09 09:59:38.111116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:47:30.857 [2024-12-09 09:59:38.111245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:47:30.857 [2024-12-09 09:59:38.111270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:47:30.857 passed 00:47:30.857 Test: blockdev nvme admin passthru ...passed 00:47:30.857 Test: blockdev copy ...passed 00:47:30.857 00:47:30.857 Run Summary: Type Total Ran Passed Failed Inactive 00:47:30.857 suites 1 1 n/a 0 0 00:47:30.857 tests 23 23 23 0 0 00:47:30.857 asserts 152 152 152 0 n/a 00:47:30.857 00:47:30.857 Elapsed time = 0.159 seconds 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:31.116 rmmod nvme_tcp 00:47:31.116 rmmod nvme_fabrics 00:47:31.116 rmmod nvme_keyring 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71014 ']' 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71014 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71014 ']' 00:47:31.116 09:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71014 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71014 00:47:31.116 killing process with pid 71014 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71014' 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71014 00:47:31.116 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71014 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:31.683 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:31.683 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:31.683 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:31.683 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:31.683 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:31.683 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:31.683 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:31.683 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:31.683 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:31.683 09:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:31.683 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:31.683 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:47:31.942 00:47:31.942 real 0m2.886s 00:47:31.942 user 0m7.802s 00:47:31.942 sys 0m1.335s 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:47:31.942 ************************************ 00:47:31.942 END TEST nvmf_bdevio_no_huge 00:47:31.942 ************************************ 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:31.942 ************************************ 00:47:31.942 START TEST nvmf_tls 00:47:31.942 ************************************ 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:47:31.942 * Looking for test storage... 
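That closes nvmf_bdevio_no_huge: 23/23 bdevio tests passed in roughly 0.16 seconds of test time and about 2.9 seconds wall clock for the whole script, and nvmf_target_extra.sh immediately hands the next script to the same run_test wrapper (argument check at autotest_common.sh@1105, a START TEST banner, the timed script, an END TEST banner). The nvmf_tls trace that follows therefore repeats the preamble seen above: test-storage discovery, the lcov version gate, and sourcing nvmf/common.sh. A minimal illustration of the wrapper pattern only, not SPDK's actual run_test:

  run_test_sketch() {
      local name=$1; shift
      printf '************************************\nSTART TEST %s\n************************************\n' "$name"
      time "$@"                          # prints the real/user/sys lines seen above
      printf '************************************\nEND TEST %s\n************************************\n' "$name"
  }
  # e.g.:  run_test_sketch nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp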
00:47:31.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:47:31.942 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:31.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:31.943 --rc genhtml_branch_coverage=1 00:47:31.943 --rc genhtml_function_coverage=1 00:47:31.943 --rc genhtml_legend=1 00:47:31.943 --rc geninfo_all_blocks=1 00:47:31.943 --rc geninfo_unexecuted_blocks=1 00:47:31.943 00:47:31.943 ' 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:31.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:31.943 --rc genhtml_branch_coverage=1 00:47:31.943 --rc genhtml_function_coverage=1 00:47:31.943 --rc genhtml_legend=1 00:47:31.943 --rc geninfo_all_blocks=1 00:47:31.943 --rc geninfo_unexecuted_blocks=1 00:47:31.943 00:47:31.943 ' 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:31.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:31.943 --rc genhtml_branch_coverage=1 00:47:31.943 --rc genhtml_function_coverage=1 00:47:31.943 --rc genhtml_legend=1 00:47:31.943 --rc geninfo_all_blocks=1 00:47:31.943 --rc geninfo_unexecuted_blocks=1 00:47:31.943 00:47:31.943 ' 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:31.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:31.943 --rc genhtml_branch_coverage=1 00:47:31.943 --rc genhtml_function_coverage=1 00:47:31.943 --rc genhtml_legend=1 00:47:31.943 --rc geninfo_all_blocks=1 00:47:31.943 --rc geninfo_unexecuted_blocks=1 00:47:31.943 00:47:31.943 ' 00:47:31.943 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:32.203 09:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:32.203 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:32.203 
09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:32.203 Cannot find device "nvmf_init_br" 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:32.203 Cannot find device "nvmf_init_br2" 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:32.203 Cannot find device "nvmf_tgt_br" 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:32.203 Cannot find device "nvmf_tgt_br2" 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:32.203 Cannot find device "nvmf_init_br" 00:47:32.203 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:32.204 Cannot find device "nvmf_init_br2" 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:32.204 Cannot find device "nvmf_tgt_br" 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:32.204 Cannot find device "nvmf_tgt_br2" 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:32.204 Cannot find device "nvmf_br" 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:32.204 Cannot find device "nvmf_init_if" 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:32.204 Cannot find device "nvmf_init_if2" 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:32.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:32.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:32.204 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:32.464 09:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:32.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:32.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:47:32.464 00:47:32.464 --- 10.0.0.3 ping statistics --- 00:47:32.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:32.464 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:32.464 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:32.464 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:47:32.464 00:47:32.464 --- 10.0.0.4 ping statistics --- 00:47:32.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:32.464 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:32.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:32.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:47:32.464 00:47:32.464 --- 10.0.0.1 ping statistics --- 00:47:32.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:32.464 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:32.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:32.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:47:32.464 00:47:32.464 --- 10.0.0.2 ping statistics --- 00:47:32.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:32.464 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71275 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71275 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71275 ']' 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:32.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:32.464 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:32.464 [2024-12-09 09:59:39.936464] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:47:32.464 [2024-12-09 09:59:39.936559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:32.723 [2024-12-09 09:59:40.089341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:32.723 [2024-12-09 09:59:40.121483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:32.723 [2024-12-09 09:59:40.121551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:32.723 [2024-12-09 09:59:40.121562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:32.723 [2024-12-09 09:59:40.121570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:32.723 [2024-12-09 09:59:40.121577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:32.723 [2024-12-09 09:59:40.121874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:32.981 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:32.981 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:47:32.981 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:32.981 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:32.981 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:32.981 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:32.981 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:47:32.981 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:47:33.241 true 00:47:33.241 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:47:33.241 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:47:33.499 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:47:33.499 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:47:33.499 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:47:33.757 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:47:33.757 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:47:34.015 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:47:34.015 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:47:34.015 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:47:34.273 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:47:34.273 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:47:34.532 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:47:34.532 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:47:34.532 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:47:34.532 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:47:34.791 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:47:34.791 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:47:34.791 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:47:35.049 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:47:35.049 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:47:35.331 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:47:35.331 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:47:35.331 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:47:35.607 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:47:35.607 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.SUzUbsDggz 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.8BdsT2G27Q 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.SUzUbsDggz 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.8BdsT2G27Q 00:47:36.174 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:47:36.433 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:47:36.692 [2024-12-09 09:59:44.165438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:36.950 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.SUzUbsDggz 00:47:36.950 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SUzUbsDggz 00:47:36.950 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:47:37.209 [2024-12-09 09:59:44.482096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:37.209 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:47:37.467 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:47:37.725 [2024-12-09 09:59:45.054276] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:37.725 [2024-12-09 09:59:45.054504] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:37.725 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:47:37.983 malloc0 00:47:37.983 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:47:38.241 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SUzUbsDggz 00:47:38.500 09:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:47:38.758 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.SUzUbsDggz 00:47:50.958 Initializing NVMe Controllers 00:47:50.958 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:47:50.958 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:47:50.958 Initialization complete. Launching workers. 00:47:50.958 ======================================================== 00:47:50.958 Latency(us) 00:47:50.958 Device Information : IOPS MiB/s Average min max 00:47:50.958 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9365.27 36.58 6835.37 1437.23 13420.67 00:47:50.958 ======================================================== 00:47:50.958 Total : 9365.27 36.58 6835.37 1437.23 13420.67 00:47:50.958 00:47:50.958 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SUzUbsDggz 00:47:50.958 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:47:50.958 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:47:50.958 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:47:50.958 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SUzUbsDggz 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71511 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71511 /var/tmp/bdevperf.sock 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71511 ']' 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:50.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
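Stripped of the xtrace noise, the target-side TLS setup that target/tls.sh performs before these perf and bdevperf runs condenses to the RPC sequence below. This is a sketch assembled from the commands visible in the trace; the rpc.py path, NQNs, address, port and key file are the ones the log shows, and key0 is the keyring label the test uses.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.SUzUbsDggz   # first interchange PSK, written and chmod 0600 earlier in the trace

# socket layer: select the ssl implementation and pin TLS 1.3 before framework init
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init

# transport, subsystem and a TLS-enabled listener (-k) on the in-namespace address
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k

# backing bdev and namespace
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# register the PSK in the target keyring and bind it to the allowed host
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0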
00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:50.959 [2024-12-09 09:59:56.453283] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:47:50.959 [2024-12-09 09:59:56.453406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71511 ] 00:47:50.959 [2024-12-09 09:59:56.612675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:50.959 [2024-12-09 09:59:56.653007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:50.959 [2024-12-09 09:59:56.686360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:47:50.959 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SUzUbsDggz 00:47:50.959 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:47:50.959 [2024-12-09 09:59:57.405244] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:50.959 TLSTESTn1 00:47:50.959 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:47:50.959 Running I/O for 10 seconds... 
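For the successful run above, the initiator flow driven over the bdevperf RPC socket is, in condensed form, the sketch below. Same caveat as before: these are the commands the trace shows, with the waitforlisten and trap plumbing omitted.

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# start bdevperf idle (-z) and control it over its own RPC socket
$bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

# the initiator needs the same PSK in its keyring before it can attach with --psk
$rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.SUzUbsDggz
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# run the verify workload against the resulting TLSTESTn1 bdev
$bdevperf_py -t 20 -s "$sock" perform_tests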
00:47:52.152 3968.00 IOPS, 15.50 MiB/s [2024-12-09T10:00:01.079Z] 4030.00 IOPS, 15.74 MiB/s [2024-12-09T10:00:01.668Z] 4049.00 IOPS, 15.82 MiB/s [2024-12-09T10:00:03.044Z] 4050.25 IOPS, 15.82 MiB/s [2024-12-09T10:00:03.980Z] 4045.60 IOPS, 15.80 MiB/s [2024-12-09T10:00:04.916Z] 4048.50 IOPS, 15.81 MiB/s [2024-12-09T10:00:05.852Z] 4053.29 IOPS, 15.83 MiB/s [2024-12-09T10:00:06.787Z] 4053.12 IOPS, 15.83 MiB/s [2024-12-09T10:00:07.723Z] 4045.78 IOPS, 15.80 MiB/s [2024-12-09T10:00:07.723Z] 4045.30 IOPS, 15.80 MiB/s 00:48:00.221 Latency(us) 00:48:00.221 [2024-12-09T10:00:07.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:00.221 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:48:00.221 Verification LBA range: start 0x0 length 0x2000 00:48:00.221 TLSTESTn1 : 10.02 4050.91 15.82 0.00 0.00 31538.51 6464.23 23116.33 00:48:00.221 [2024-12-09T10:00:07.723Z] =================================================================================================================== 00:48:00.221 [2024-12-09T10:00:07.723Z] Total : 4050.91 15.82 0.00 0.00 31538.51 6464.23 23116.33 00:48:00.221 { 00:48:00.221 "results": [ 00:48:00.221 { 00:48:00.221 "job": "TLSTESTn1", 00:48:00.221 "core_mask": "0x4", 00:48:00.221 "workload": "verify", 00:48:00.221 "status": "finished", 00:48:00.221 "verify_range": { 00:48:00.221 "start": 0, 00:48:00.221 "length": 8192 00:48:00.221 }, 00:48:00.221 "queue_depth": 128, 00:48:00.221 "io_size": 4096, 00:48:00.221 "runtime": 10.017254, 00:48:00.221 "iops": 4050.9105589216365, 00:48:00.221 "mibps": 15.823869370787643, 00:48:00.221 "io_failed": 0, 00:48:00.221 "io_timeout": 0, 00:48:00.221 "avg_latency_us": 31538.505723829385, 00:48:00.221 "min_latency_us": 6464.232727272727, 00:48:00.221 "max_latency_us": 23116.334545454545 00:48:00.221 } 00:48:00.221 ], 00:48:00.221 "core_count": 1 00:48:00.221 } 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71511 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71511 ']' 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71511 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71511 00:48:00.221 killing process with pid 71511 00:48:00.221 Received shutdown signal, test time was about 10.000000 seconds 00:48:00.221 00:48:00.221 Latency(us) 00:48:00.221 [2024-12-09T10:00:07.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:00.221 [2024-12-09T10:00:07.723Z] =================================================================================================================== 00:48:00.221 [2024-12-09T10:00:07.723Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 71511' 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71511 00:48:00.221 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71511 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8BdsT2G27Q 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8BdsT2G27Q 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8BdsT2G27Q 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8BdsT2G27Q 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:48:00.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71644 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71644 /var/tmp/bdevperf.sock 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71644 ']' 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:00.480 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:00.480 [2024-12-09 10:00:07.936511] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:48:00.480 [2024-12-09 10:00:07.936617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71644 ] 00:48:00.739 [2024-12-09 10:00:08.082698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:00.739 [2024-12-09 10:00:08.115253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:00.739 [2024-12-09 10:00:08.144144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:00.739 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:00.739 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:00.739 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8BdsT2G27Q 00:48:01.032 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:48:01.314 [2024-12-09 10:00:08.751505] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:01.314 [2024-12-09 10:00:08.757250] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:01.314 [2024-12-09 10:00:08.758117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc030 (107): Transport endpoint is not connected 00:48:01.314 [2024-12-09 10:00:08.759107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc030 (9): Bad file descriptor 00:48:01.314 [2024-12-09 10:00:08.760105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:48:01.314 [2024-12-09 10:00:08.760128] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:48:01.314 [2024-12-09 10:00:08.760140] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:48:01.314 [2024-12-09 10:00:08.760155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:48:01.314 request: 00:48:01.314 { 00:48:01.314 "name": "TLSTEST", 00:48:01.314 "trtype": "tcp", 00:48:01.314 "traddr": "10.0.0.3", 00:48:01.314 "adrfam": "ipv4", 00:48:01.314 "trsvcid": "4420", 00:48:01.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:01.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:01.314 "prchk_reftag": false, 00:48:01.314 "prchk_guard": false, 00:48:01.314 "hdgst": false, 00:48:01.314 "ddgst": false, 00:48:01.314 "psk": "key0", 00:48:01.314 "allow_unrecognized_csi": false, 00:48:01.314 "method": "bdev_nvme_attach_controller", 00:48:01.314 "req_id": 1 00:48:01.314 } 00:48:01.314 Got JSON-RPC error response 00:48:01.314 response: 00:48:01.314 { 00:48:01.314 "code": -5, 00:48:01.314 "message": "Input/output error" 00:48:01.314 } 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71644 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71644 ']' 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71644 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71644 00:48:01.314 killing process with pid 71644 00:48:01.314 Received shutdown signal, test time was about 10.000000 seconds 00:48:01.314 00:48:01.314 Latency(us) 00:48:01.314 [2024-12-09T10:00:08.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:01.314 [2024-12-09T10:00:08.816Z] =================================================================================================================== 00:48:01.314 [2024-12-09T10:00:08.816Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71644' 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71644 00:48:01.314 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71644 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SUzUbsDggz 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SUzUbsDggz 
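The Input/output error above is the intended outcome of the first negative case: the bdevperf side registered the second key (/tmp/tmp.8BdsT2G27Q) under key0 while the target only trusts the first key, so the TLS handshake never completes, spdk_sock_recv reports errno 107 and bdev_nvme_attach_controller returns -5. The NOT wrapper simply inverts the exit status, so the case only passes when the attach fails; without the tracing it boils down to a check like the sketch below (run_bdevperf and NOT are the harness helpers seen in the trace).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# sketch of the expected-failure check: attach must fail when the PSKs do not match
if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "attach unexpectedly succeeded with a mismatched PSK" >&2
    exit 1
fi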
00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SUzUbsDggz 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SUzUbsDggz 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71665 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71665 /var/tmp/bdevperf.sock 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71665 ']' 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:01.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:01.574 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:01.574 [2024-12-09 10:00:09.051554] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:48:01.574 [2024-12-09 10:00:09.051667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71665 ] 00:48:01.833 [2024-12-09 10:00:09.203572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:01.833 [2024-12-09 10:00:09.235981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:01.833 [2024-12-09 10:00:09.265072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:01.833 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:01.833 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:01.833 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SUzUbsDggz 00:48:02.400 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:48:02.400 [2024-12-09 10:00:09.848739] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:02.400 [2024-12-09 10:00:09.857295] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:48:02.400 [2024-12-09 10:00:09.857369] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:48:02.400 [2024-12-09 10:00:09.857420] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:02.400 [2024-12-09 10:00:09.858361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c030 (107): Transport endpoint is not connected 00:48:02.400 [2024-12-09 10:00:09.859352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c030 (9): Bad file descriptor 00:48:02.400 [2024-12-09 10:00:09.860349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:48:02.400 [2024-12-09 10:00:09.860375] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:48:02.400 [2024-12-09 10:00:09.860386] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:48:02.400 [2024-12-09 10:00:09.860400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:48:02.400 request: 00:48:02.400 { 00:48:02.400 "name": "TLSTEST", 00:48:02.400 "trtype": "tcp", 00:48:02.400 "traddr": "10.0.0.3", 00:48:02.400 "adrfam": "ipv4", 00:48:02.400 "trsvcid": "4420", 00:48:02.400 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:02.400 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:48:02.400 "prchk_reftag": false, 00:48:02.400 "prchk_guard": false, 00:48:02.400 "hdgst": false, 00:48:02.400 "ddgst": false, 00:48:02.400 "psk": "key0", 00:48:02.400 "allow_unrecognized_csi": false, 00:48:02.400 "method": "bdev_nvme_attach_controller", 00:48:02.400 "req_id": 1 00:48:02.400 } 00:48:02.400 Got JSON-RPC error response 00:48:02.400 response: 00:48:02.400 { 00:48:02.400 "code": -5, 00:48:02.400 "message": "Input/output error" 00:48:02.400 } 00:48:02.400 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71665 00:48:02.400 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71665 ']' 00:48:02.400 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71665 00:48:02.400 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:02.400 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:02.400 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71665 00:48:02.659 killing process with pid 71665 00:48:02.659 Received shutdown signal, test time was about 10.000000 seconds 00:48:02.659 00:48:02.659 Latency(us) 00:48:02.659 [2024-12-09T10:00:10.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:02.659 [2024-12-09T10:00:10.161Z] =================================================================================================================== 00:48:02.659 [2024-12-09T10:00:10.161Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:02.659 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:48:02.659 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:48:02.659 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71665' 00:48:02.659 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71665 00:48:02.659 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71665 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SUzUbsDggz 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SUzUbsDggz 
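The host2 failure above is a key-lookup failure rather than bad key material: as the tcp.c and posix.c errors show, the target derives the TLS PSK identity from the host and subsystem NQNs ("NVMe0R01 <hostnqn> <subnqn>") and only finds a key when exactly that pairing was registered via nvmf_subsystem_add_host --psk. host2 was never registered for cnode1, and the next case points the initiator at cnode2, which was never created at all, so both attaches are expected to fail the same way. Purely as a hypothetical sketch of what the target would need for those identities to resolve (the test deliberately omits this so the NOT checks pass):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# hypothetical registrations, not part of the test: allow host2 on the existing subsystem...
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0
# ...and, for the cnode2 case, create that subsystem (plus a TLS listener and namespace,
# mirroring the cnode1 setup earlier) before granting host1 access to it
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0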
00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SUzUbsDggz 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SUzUbsDggz 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71686 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71686 /var/tmp/bdevperf.sock 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71686 ']' 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:02.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:02.659 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:02.660 [2024-12-09 10:00:10.154299] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:48:02.660 [2024-12-09 10:00:10.154450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71686 ] 00:48:02.918 [2024-12-09 10:00:10.309267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:02.918 [2024-12-09 10:00:10.341842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:02.918 [2024-12-09 10:00:10.371164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:03.854 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:03.854 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:03.854 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SUzUbsDggz 00:48:04.113 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:48:04.372 [2024-12-09 10:00:11.722945] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:04.372 [2024-12-09 10:00:11.728111] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:48:04.372 [2024-12-09 10:00:11.728156] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:48:04.372 [2024-12-09 10:00:11.728210] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:04.372 [2024-12-09 10:00:11.728757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1030 (107): Transport endpoint is not connected 00:48:04.372 [2024-12-09 10:00:11.729761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1030 (9): Bad file descriptor 00:48:04.372 [2024-12-09 10:00:11.730758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:48:04.372 [2024-12-09 10:00:11.730798] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:48:04.372 [2024-12-09 10:00:11.730821] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:48:04.372 [2024-12-09 10:00:11.730849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:48:04.372 request: 00:48:04.372 { 00:48:04.372 "name": "TLSTEST", 00:48:04.372 "trtype": "tcp", 00:48:04.372 "traddr": "10.0.0.3", 00:48:04.372 "adrfam": "ipv4", 00:48:04.372 "trsvcid": "4420", 00:48:04.372 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:48:04.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:04.372 "prchk_reftag": false, 00:48:04.372 "prchk_guard": false, 00:48:04.372 "hdgst": false, 00:48:04.372 "ddgst": false, 00:48:04.372 "psk": "key0", 00:48:04.372 "allow_unrecognized_csi": false, 00:48:04.372 "method": "bdev_nvme_attach_controller", 00:48:04.372 "req_id": 1 00:48:04.372 } 00:48:04.372 Got JSON-RPC error response 00:48:04.372 response: 00:48:04.372 { 00:48:04.372 "code": -5, 00:48:04.372 "message": "Input/output error" 00:48:04.372 } 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71686 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71686 ']' 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71686 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71686 00:48:04.372 killing process with pid 71686 00:48:04.372 Received shutdown signal, test time was about 10.000000 seconds 00:48:04.372 00:48:04.372 Latency(us) 00:48:04.372 [2024-12-09T10:00:11.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:04.372 [2024-12-09T10:00:11.874Z] =================================================================================================================== 00:48:04.372 [2024-12-09T10:00:11.874Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71686' 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71686 00:48:04.372 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71686 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:48:04.631 10:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71715 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71715 /var/tmp/bdevperf.sock 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71715 ']' 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:04.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:04.631 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:04.631 [2024-12-09 10:00:12.016079] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:48:04.631 [2024-12-09 10:00:12.016164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71715 ] 00:48:04.890 [2024-12-09 10:00:12.161263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:04.890 [2024-12-09 10:00:12.193902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:04.890 [2024-12-09 10:00:12.222593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:04.890 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:04.890 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:04.890 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:48:05.149 [2024-12-09 10:00:12.557980] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:48:05.149 [2024-12-09 10:00:12.558041] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:48:05.149 request: 00:48:05.149 { 00:48:05.149 "name": "key0", 00:48:05.149 "path": "", 00:48:05.149 "method": "keyring_file_add_key", 00:48:05.149 "req_id": 1 00:48:05.149 } 00:48:05.149 Got JSON-RPC error response 00:48:05.149 response: 00:48:05.149 { 00:48:05.149 "code": -1, 00:48:05.149 "message": "Operation not permitted" 00:48:05.149 } 00:48:05.149 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:48:05.409 [2024-12-09 10:00:12.826157] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:05.409 [2024-12-09 10:00:12.826225] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:48:05.409 request: 00:48:05.409 { 00:48:05.409 "name": "TLSTEST", 00:48:05.409 "trtype": "tcp", 00:48:05.409 "traddr": "10.0.0.3", 00:48:05.409 "adrfam": "ipv4", 00:48:05.409 "trsvcid": "4420", 00:48:05.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:05.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:05.409 "prchk_reftag": false, 00:48:05.409 "prchk_guard": false, 00:48:05.409 "hdgst": false, 00:48:05.409 "ddgst": false, 00:48:05.409 "psk": "key0", 00:48:05.409 "allow_unrecognized_csi": false, 00:48:05.409 "method": "bdev_nvme_attach_controller", 00:48:05.409 "req_id": 1 00:48:05.409 } 00:48:05.409 Got JSON-RPC error response 00:48:05.409 response: 00:48:05.409 { 00:48:05.409 "code": -126, 00:48:05.409 "message": "Required key not available" 00:48:05.409 } 00:48:05.409 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71715 00:48:05.409 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71715 ']' 00:48:05.409 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71715 00:48:05.409 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:05.409 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:05.409 10:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71715 00:48:05.409 killing process with pid 71715 00:48:05.409 Received shutdown signal, test time was about 10.000000 seconds 00:48:05.409 00:48:05.409 Latency(us) 00:48:05.409 [2024-12-09T10:00:12.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:05.409 [2024-12-09T10:00:12.911Z] =================================================================================================================== 00:48:05.409 [2024-12-09T10:00:12.911Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:05.409 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:48:05.409 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:48:05.409 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71715' 00:48:05.409 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71715 00:48:05.409 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71715 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71275 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71275 ']' 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71275 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71275 00:48:05.669 killing process with pid 71275 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71275' 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71275 00:48:05.669 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71275 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.pnDhFlvg6S 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.pnDhFlvg6S 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71750 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71750 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71750 ']' 00:48:05.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:05.928 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:05.928 [2024-12-09 10:00:13.420885] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:05.928 [2024-12-09 10:00:13.421056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:06.187 [2024-12-09 10:00:13.581144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:06.187 [2024-12-09 10:00:13.611667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:06.187 [2024-12-09 10:00:13.611735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:48:06.187 [2024-12-09 10:00:13.611762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:06.187 [2024-12-09 10:00:13.611770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:06.187 [2024-12-09 10:00:13.611776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:06.187 [2024-12-09 10:00:13.612134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:06.187 [2024-12-09 10:00:13.645453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:07.123 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:07.123 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:07.123 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:07.123 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:07.123 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:07.123 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:07.123 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.pnDhFlvg6S 00:48:07.123 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pnDhFlvg6S 00:48:07.123 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:48:07.382 [2024-12-09 10:00:14.683276] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:07.383 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:48:07.641 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:48:07.899 [2024-12-09 10:00:15.231406] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:07.899 [2024-12-09 10:00:15.231671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:07.899 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:48:08.157 malloc0 00:48:08.157 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:48:08.416 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S 00:48:08.674 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pnDhFlvg6S 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
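The key file /tmp/tmp.pnDhFlvg6S registered as key0 in this run holds the interchange-format secret produced a few lines earlier by format_interchange_psk (the NVMeTLSkey-1:02:... value). Judging from that output, the helper base64-encodes the configured secret with a 4-byte CRC-32 trailer; below is a minimal Python sketch under that assumption. The real helper is the shell/python pipeline in nvmf/common.sh, not this code.

# Sketch, assuming the interchange key is
# "NVMeTLSkey-1:<hash>:" + base64(secret || CRC-32(secret), little-endian) + ":".
import base64, struct, zlib

def format_interchange_psk(secret: str, hash_id: int) -> str:
    data = secret.encode()
    data += struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)  # CRC-32 trailer
    return f"NVMeTLSkey-1:{hash_id:02d}:{base64.b64encode(data).decode()}:"

# With hash_id=2 and the 48-character secret used above, this should reproduce the
# NVMeTLSkey-1:02:MDAx... value in the log if the CRC-trailer assumption holds.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))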
00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pnDhFlvg6S 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71807 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71807 /var/tmp/bdevperf.sock 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71807 ']' 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:08.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:08.933 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:08.933 [2024-12-09 10:00:16.403904] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:48:08.933 [2024-12-09 10:00:16.404058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71807 ] 00:48:09.210 [2024-12-09 10:00:16.548475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:09.210 [2024-12-09 10:00:16.581553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:09.210 [2024-12-09 10:00:16.611484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:09.210 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:09.210 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:09.210 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S 00:48:09.492 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:48:09.751 [2024-12-09 10:00:17.231831] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:10.009 TLSTESTn1 00:48:10.009 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:48:10.009 Running I/O for 10 seconds... 00:48:12.319 3759.00 IOPS, 14.68 MiB/s [2024-12-09T10:00:20.756Z] 3825.50 IOPS, 14.94 MiB/s [2024-12-09T10:00:21.692Z] 3851.67 IOPS, 15.05 MiB/s [2024-12-09T10:00:22.628Z] 3865.00 IOPS, 15.10 MiB/s [2024-12-09T10:00:23.565Z] 3870.00 IOPS, 15.12 MiB/s [2024-12-09T10:00:24.501Z] 3879.50 IOPS, 15.15 MiB/s [2024-12-09T10:00:25.439Z] 3888.43 IOPS, 15.19 MiB/s [2024-12-09T10:00:26.828Z] 3886.50 IOPS, 15.18 MiB/s [2024-12-09T10:00:27.762Z] 3898.33 IOPS, 15.23 MiB/s [2024-12-09T10:00:27.762Z] 3910.60 IOPS, 15.28 MiB/s 00:48:20.260 Latency(us) 00:48:20.260 [2024-12-09T10:00:27.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:20.260 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:48:20.260 Verification LBA range: start 0x0 length 0x2000 00:48:20.260 TLSTESTn1 : 10.02 3916.08 15.30 0.00 0.00 32623.10 6315.29 25380.31 00:48:20.260 [2024-12-09T10:00:27.762Z] =================================================================================================================== 00:48:20.260 [2024-12-09T10:00:27.762Z] Total : 3916.08 15.30 0.00 0.00 32623.10 6315.29 25380.31 00:48:20.260 { 00:48:20.260 "results": [ 00:48:20.260 { 00:48:20.260 "job": "TLSTESTn1", 00:48:20.260 "core_mask": "0x4", 00:48:20.260 "workload": "verify", 00:48:20.260 "status": "finished", 00:48:20.260 "verify_range": { 00:48:20.260 "start": 0, 00:48:20.260 "length": 8192 00:48:20.260 }, 00:48:20.260 "queue_depth": 128, 00:48:20.260 "io_size": 4096, 00:48:20.260 "runtime": 10.018193, 00:48:20.260 "iops": 3916.0754838721914, 00:48:20.260 "mibps": 15.297169858875748, 00:48:20.260 "io_failed": 0, 00:48:20.260 "io_timeout": 0, 00:48:20.260 "avg_latency_us": 32623.104683375353, 00:48:20.260 "min_latency_us": 6315.2872727272725, 00:48:20.260 
"max_latency_us": 25380.305454545454 00:48:20.260 } 00:48:20.260 ], 00:48:20.260 "core_count": 1 00:48:20.260 } 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71807 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71807 ']' 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71807 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71807 00:48:20.260 killing process with pid 71807 00:48:20.260 Received shutdown signal, test time was about 10.000000 seconds 00:48:20.260 00:48:20.260 Latency(us) 00:48:20.260 [2024-12-09T10:00:27.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:20.260 [2024-12-09T10:00:27.762Z] =================================================================================================================== 00:48:20.260 [2024-12-09T10:00:27.762Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71807' 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71807 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71807 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.pnDhFlvg6S 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pnDhFlvg6S 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pnDhFlvg6S 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pnDhFlvg6S 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pnDhFlvg6S 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71935 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71935 /var/tmp/bdevperf.sock 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71935 ']' 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:20.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:20.260 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:20.260 [2024-12-09 10:00:27.734685] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:48:20.260 [2024-12-09 10:00:27.734793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71935 ] 00:48:20.520 [2024-12-09 10:00:27.884212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:20.520 [2024-12-09 10:00:27.915521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:20.520 [2024-12-09 10:00:27.945020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:20.520 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:20.520 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:20.520 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S 00:48:20.779 [2024-12-09 10:00:28.273157] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pnDhFlvg6S': 0100666 00:48:20.779 [2024-12-09 10:00:28.273236] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:48:21.038 request: 00:48:21.038 { 00:48:21.038 "name": "key0", 00:48:21.038 "path": "/tmp/tmp.pnDhFlvg6S", 00:48:21.038 "method": "keyring_file_add_key", 00:48:21.038 "req_id": 1 00:48:21.038 } 00:48:21.038 Got JSON-RPC error response 00:48:21.038 response: 00:48:21.038 { 00:48:21.038 "code": -1, 00:48:21.038 "message": "Operation not permitted" 00:48:21.038 } 00:48:21.038 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:48:21.038 [2024-12-09 10:00:28.529347] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:21.038 [2024-12-09 10:00:28.529448] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:48:21.038 request: 00:48:21.038 { 00:48:21.038 "name": "TLSTEST", 00:48:21.038 "trtype": "tcp", 00:48:21.038 "traddr": "10.0.0.3", 00:48:21.038 "adrfam": "ipv4", 00:48:21.038 "trsvcid": "4420", 00:48:21.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:21.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:21.038 "prchk_reftag": false, 00:48:21.038 "prchk_guard": false, 00:48:21.038 "hdgst": false, 00:48:21.038 "ddgst": false, 00:48:21.038 "psk": "key0", 00:48:21.038 "allow_unrecognized_csi": false, 00:48:21.038 "method": "bdev_nvme_attach_controller", 00:48:21.038 "req_id": 1 00:48:21.038 } 00:48:21.038 Got JSON-RPC error response 00:48:21.038 response: 00:48:21.038 { 00:48:21.038 "code": -126, 00:48:21.038 "message": "Required key not available" 00:48:21.038 } 00:48:21.297 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71935 00:48:21.297 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71935 ']' 00:48:21.297 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71935 00:48:21.297 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:21.297 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:21.297 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71935 00:48:21.297 killing process with pid 71935 00:48:21.297 Received shutdown signal, test time was about 10.000000 seconds 00:48:21.297 00:48:21.297 Latency(us) 00:48:21.297 [2024-12-09T10:00:28.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:21.297 [2024-12-09T10:00:28.799Z] =================================================================================================================== 00:48:21.297 [2024-12-09T10:00:28.799Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71935' 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71935 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71935 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71750 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71750 ']' 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71750 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71750 00:48:21.298 killing process with pid 71750 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71750' 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71750 00:48:21.298 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71750 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71961 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71961 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71961 ']' 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:21.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:21.557 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:21.557 [2024-12-09 10:00:29.031606] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:21.557 [2024-12-09 10:00:29.031696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:21.816 [2024-12-09 10:00:29.178117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:21.816 [2024-12-09 10:00:29.209065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:21.816 [2024-12-09 10:00:29.209136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:21.816 [2024-12-09 10:00:29.209163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:21.816 [2024-12-09 10:00:29.209172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:21.816 [2024-12-09 10:00:29.209179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
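The target starting here (pid 71961) serves the negative test that follows: tls.sh has just chmod'ed the key file to 0666, so keyring_file_add_key rejects it and nvmf_subsystem_add_host later fails because key0 was never created. The keyring errors in this log (non-absolute path earlier, bad permissions above and below) imply roughly the checks sketched here; this is a hedged approximation, not the actual keyring.c logic.

import os, stat

# Rough approximation of the checks implied by the keyring errors in this log:
# the path must be absolute, and the file must not be accessible to group/other.
def check_psk_file(path: str) -> None:
    if not os.path.isabs(path):
        raise PermissionError(f"Non-absolute paths are not allowed: {path}")
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"Invalid permissions for key file '{path}': {oct(mode)}")

# check_psk_file("/tmp/tmp.pnDhFlvg6S") would raise after `chmod 0666`
# and pass again after `chmod 0600`.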
00:48:21.816 [2024-12-09 10:00:29.209493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:21.816 [2024-12-09 10:00:29.239141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:21.816 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:21.816 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:21.816 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:21.816 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:21.816 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.pnDhFlvg6S 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.pnDhFlvg6S 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.pnDhFlvg6S 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pnDhFlvg6S 00:48:22.075 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:48:22.075 [2024-12-09 10:00:29.560517] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:22.334 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:48:22.593 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:48:22.851 [2024-12-09 10:00:30.104645] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:22.851 [2024-12-09 10:00:30.104893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:22.851 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:48:23.111 malloc0 00:48:23.111 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:48:23.369 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S 00:48:23.627 
[2024-12-09 10:00:30.951484] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pnDhFlvg6S': 0100666 00:48:23.627 [2024-12-09 10:00:30.951536] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:48:23.627 request: 00:48:23.627 { 00:48:23.627 "name": "key0", 00:48:23.627 "path": "/tmp/tmp.pnDhFlvg6S", 00:48:23.627 "method": "keyring_file_add_key", 00:48:23.627 "req_id": 1 00:48:23.627 } 00:48:23.627 Got JSON-RPC error response 00:48:23.627 response: 00:48:23.627 { 00:48:23.627 "code": -1, 00:48:23.627 "message": "Operation not permitted" 00:48:23.627 } 00:48:23.627 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:48:23.886 [2024-12-09 10:00:31.263572] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:48:23.886 [2024-12-09 10:00:31.263644] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:48:23.886 request: 00:48:23.886 { 00:48:23.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:23.886 "host": "nqn.2016-06.io.spdk:host1", 00:48:23.886 "psk": "key0", 00:48:23.886 "method": "nvmf_subsystem_add_host", 00:48:23.886 "req_id": 1 00:48:23.886 } 00:48:23.886 Got JSON-RPC error response 00:48:23.886 response: 00:48:23.886 { 00:48:23.886 "code": -32603, 00:48:23.886 "message": "Internal error" 00:48:23.886 } 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71961 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71961 ']' 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71961 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71961 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71961' 00:48:23.886 killing process with pid 71961 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71961 00:48:23.886 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71961 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.pnDhFlvg6S 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72024 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72024 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72024 ']' 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:24.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:24.145 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:24.145 [2024-12-09 10:00:31.569647] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:24.145 [2024-12-09 10:00:31.569735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:24.403 [2024-12-09 10:00:31.715174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:24.403 [2024-12-09 10:00:31.747374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:24.403 [2024-12-09 10:00:31.747435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:24.403 [2024-12-09 10:00:31.747448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:24.403 [2024-12-09 10:00:31.747457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:24.403 [2024-12-09 10:00:31.747464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:48:24.403 [2024-12-09 10:00:31.747776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:24.403 [2024-12-09 10:00:31.778159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:24.403 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:24.403 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:24.403 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:24.403 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:24.403 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:24.403 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:24.403 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.pnDhFlvg6S 00:48:24.403 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pnDhFlvg6S 00:48:24.403 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:48:24.662 [2024-12-09 10:00:32.115055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:24.662 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:48:25.230 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:48:25.230 [2024-12-09 10:00:32.667196] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:25.230 [2024-12-09 10:00:32.667424] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:25.230 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:48:25.489 malloc0 00:48:25.489 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:48:25.749 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S 00:48:26.008 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:48:26.267 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72072 00:48:26.267 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:48:26.267 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:26.267 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72072 /var/tmp/bdevperf.sock 00:48:26.267 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72072 ']' 
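Everything in this section is driven over SPDK's JSON-RPC Unix sockets (scripts/rpc.py against /var/tmp/spdk.sock for the target and /var/tmp/bdevperf.sock for bdevperf); the request/response blocks dumped above are the test framework's view of those calls. A simplified client sketch follows, assuming the standard JSON-RPC 2.0 envelope and naive response framing; scripts/rpc.py is the real tool, and the example call at the end just mirrors the keyring request shown in this log.

import json, socket

# Minimal JSON-RPC-over-Unix-socket client sketch (no robust framing or error handling).
def rpc_call(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    req = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:  # naive: keep reading until one complete JSON object parses
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response arrived")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except json.JSONDecodeError:
                continue

# e.g. the keyring call used repeatedly above:
# rpc_call("/var/tmp/bdevperf.sock", "keyring_file_add_key",
#          {"name": "key0", "path": "/tmp/tmp.pnDhFlvg6S"})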
00:48:26.267 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:26.267 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:26.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:26.267 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:26.267 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:26.267 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:26.525 [2024-12-09 10:00:33.770875] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:26.525 [2024-12-09 10:00:33.770969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72072 ] 00:48:26.525 [2024-12-09 10:00:33.913899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:26.525 [2024-12-09 10:00:33.946314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:26.525 [2024-12-09 10:00:33.976187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:26.783 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:26.783 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:26.783 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S 00:48:27.095 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:48:27.095 [2024-12-09 10:00:34.584054] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:27.380 TLSTESTn1 00:48:27.380 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:48:27.639 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:48:27.639 "subsystems": [ 00:48:27.639 { 00:48:27.639 "subsystem": "keyring", 00:48:27.639 "config": [ 00:48:27.639 { 00:48:27.639 "method": "keyring_file_add_key", 00:48:27.639 "params": { 00:48:27.639 "name": "key0", 00:48:27.639 "path": "/tmp/tmp.pnDhFlvg6S" 00:48:27.639 } 00:48:27.639 } 00:48:27.639 ] 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "subsystem": "iobuf", 00:48:27.639 "config": [ 00:48:27.639 { 00:48:27.639 "method": "iobuf_set_options", 00:48:27.639 "params": { 00:48:27.639 "small_pool_count": 8192, 00:48:27.639 "large_pool_count": 1024, 00:48:27.639 "small_bufsize": 8192, 00:48:27.639 "large_bufsize": 135168, 00:48:27.639 "enable_numa": false 00:48:27.639 } 00:48:27.639 } 00:48:27.639 ] 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "subsystem": "sock", 00:48:27.639 "config": [ 00:48:27.639 { 00:48:27.639 "method": "sock_set_default_impl", 00:48:27.639 "params": { 
00:48:27.639 "impl_name": "uring" 00:48:27.639 } 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "method": "sock_impl_set_options", 00:48:27.639 "params": { 00:48:27.639 "impl_name": "ssl", 00:48:27.639 "recv_buf_size": 4096, 00:48:27.639 "send_buf_size": 4096, 00:48:27.639 "enable_recv_pipe": true, 00:48:27.639 "enable_quickack": false, 00:48:27.639 "enable_placement_id": 0, 00:48:27.639 "enable_zerocopy_send_server": true, 00:48:27.639 "enable_zerocopy_send_client": false, 00:48:27.639 "zerocopy_threshold": 0, 00:48:27.639 "tls_version": 0, 00:48:27.639 "enable_ktls": false 00:48:27.639 } 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "method": "sock_impl_set_options", 00:48:27.639 "params": { 00:48:27.639 "impl_name": "posix", 00:48:27.639 "recv_buf_size": 2097152, 00:48:27.639 "send_buf_size": 2097152, 00:48:27.639 "enable_recv_pipe": true, 00:48:27.639 "enable_quickack": false, 00:48:27.639 "enable_placement_id": 0, 00:48:27.639 "enable_zerocopy_send_server": true, 00:48:27.639 "enable_zerocopy_send_client": false, 00:48:27.639 "zerocopy_threshold": 0, 00:48:27.639 "tls_version": 0, 00:48:27.639 "enable_ktls": false 00:48:27.639 } 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "method": "sock_impl_set_options", 00:48:27.639 "params": { 00:48:27.639 "impl_name": "uring", 00:48:27.639 "recv_buf_size": 2097152, 00:48:27.639 "send_buf_size": 2097152, 00:48:27.639 "enable_recv_pipe": true, 00:48:27.639 "enable_quickack": false, 00:48:27.639 "enable_placement_id": 0, 00:48:27.639 "enable_zerocopy_send_server": false, 00:48:27.639 "enable_zerocopy_send_client": false, 00:48:27.639 "zerocopy_threshold": 0, 00:48:27.639 "tls_version": 0, 00:48:27.639 "enable_ktls": false 00:48:27.639 } 00:48:27.639 } 00:48:27.639 ] 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "subsystem": "vmd", 00:48:27.639 "config": [] 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "subsystem": "accel", 00:48:27.639 "config": [ 00:48:27.639 { 00:48:27.639 "method": "accel_set_options", 00:48:27.639 "params": { 00:48:27.639 "small_cache_size": 128, 00:48:27.639 "large_cache_size": 16, 00:48:27.639 "task_count": 2048, 00:48:27.639 "sequence_count": 2048, 00:48:27.639 "buf_count": 2048 00:48:27.639 } 00:48:27.639 } 00:48:27.639 ] 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "subsystem": "bdev", 00:48:27.639 "config": [ 00:48:27.639 { 00:48:27.639 "method": "bdev_set_options", 00:48:27.639 "params": { 00:48:27.639 "bdev_io_pool_size": 65535, 00:48:27.639 "bdev_io_cache_size": 256, 00:48:27.639 "bdev_auto_examine": true, 00:48:27.639 "iobuf_small_cache_size": 128, 00:48:27.639 "iobuf_large_cache_size": 16 00:48:27.639 } 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "method": "bdev_raid_set_options", 00:48:27.639 "params": { 00:48:27.639 "process_window_size_kb": 1024, 00:48:27.639 "process_max_bandwidth_mb_sec": 0 00:48:27.639 } 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "method": "bdev_iscsi_set_options", 00:48:27.639 "params": { 00:48:27.639 "timeout_sec": 30 00:48:27.639 } 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "method": "bdev_nvme_set_options", 00:48:27.639 "params": { 00:48:27.639 "action_on_timeout": "none", 00:48:27.639 "timeout_us": 0, 00:48:27.639 "timeout_admin_us": 0, 00:48:27.639 "keep_alive_timeout_ms": 10000, 00:48:27.639 "arbitration_burst": 0, 00:48:27.639 "low_priority_weight": 0, 00:48:27.639 "medium_priority_weight": 0, 00:48:27.639 "high_priority_weight": 0, 00:48:27.639 "nvme_adminq_poll_period_us": 10000, 00:48:27.639 "nvme_ioq_poll_period_us": 0, 00:48:27.639 "io_queue_requests": 0, 00:48:27.639 "delay_cmd_submit": 
true, 00:48:27.639 "transport_retry_count": 4, 00:48:27.639 "bdev_retry_count": 3, 00:48:27.639 "transport_ack_timeout": 0, 00:48:27.639 "ctrlr_loss_timeout_sec": 0, 00:48:27.639 "reconnect_delay_sec": 0, 00:48:27.639 "fast_io_fail_timeout_sec": 0, 00:48:27.639 "disable_auto_failback": false, 00:48:27.639 "generate_uuids": false, 00:48:27.639 "transport_tos": 0, 00:48:27.639 "nvme_error_stat": false, 00:48:27.639 "rdma_srq_size": 0, 00:48:27.639 "io_path_stat": false, 00:48:27.639 "allow_accel_sequence": false, 00:48:27.639 "rdma_max_cq_size": 0, 00:48:27.639 "rdma_cm_event_timeout_ms": 0, 00:48:27.639 "dhchap_digests": [ 00:48:27.639 "sha256", 00:48:27.639 "sha384", 00:48:27.639 "sha512" 00:48:27.639 ], 00:48:27.639 "dhchap_dhgroups": [ 00:48:27.639 "null", 00:48:27.639 "ffdhe2048", 00:48:27.639 "ffdhe3072", 00:48:27.639 "ffdhe4096", 00:48:27.639 "ffdhe6144", 00:48:27.639 "ffdhe8192" 00:48:27.639 ] 00:48:27.639 } 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "method": "bdev_nvme_set_hotplug", 00:48:27.639 "params": { 00:48:27.639 "period_us": 100000, 00:48:27.639 "enable": false 00:48:27.639 } 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "method": "bdev_malloc_create", 00:48:27.639 "params": { 00:48:27.639 "name": "malloc0", 00:48:27.639 "num_blocks": 8192, 00:48:27.639 "block_size": 4096, 00:48:27.639 "physical_block_size": 4096, 00:48:27.639 "uuid": "033676c3-3c3f-4433-ab1d-bc6ed4ed5e3b", 00:48:27.639 "optimal_io_boundary": 0, 00:48:27.639 "md_size": 0, 00:48:27.639 "dif_type": 0, 00:48:27.639 "dif_is_head_of_md": false, 00:48:27.639 "dif_pi_format": 0 00:48:27.639 } 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "method": "bdev_wait_for_examine" 00:48:27.639 } 00:48:27.639 ] 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "subsystem": "nbd", 00:48:27.639 "config": [] 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "subsystem": "scheduler", 00:48:27.639 "config": [ 00:48:27.639 { 00:48:27.639 "method": "framework_set_scheduler", 00:48:27.639 "params": { 00:48:27.639 "name": "static" 00:48:27.639 } 00:48:27.639 } 00:48:27.639 ] 00:48:27.639 }, 00:48:27.639 { 00:48:27.639 "subsystem": "nvmf", 00:48:27.640 "config": [ 00:48:27.640 { 00:48:27.640 "method": "nvmf_set_config", 00:48:27.640 "params": { 00:48:27.640 "discovery_filter": "match_any", 00:48:27.640 "admin_cmd_passthru": { 00:48:27.640 "identify_ctrlr": false 00:48:27.640 }, 00:48:27.640 "dhchap_digests": [ 00:48:27.640 "sha256", 00:48:27.640 "sha384", 00:48:27.640 "sha512" 00:48:27.640 ], 00:48:27.640 "dhchap_dhgroups": [ 00:48:27.640 "null", 00:48:27.640 "ffdhe2048", 00:48:27.640 "ffdhe3072", 00:48:27.640 "ffdhe4096", 00:48:27.640 "ffdhe6144", 00:48:27.640 "ffdhe8192" 00:48:27.640 ] 00:48:27.640 } 00:48:27.640 }, 00:48:27.640 { 00:48:27.640 "method": "nvmf_set_max_subsystems", 00:48:27.640 "params": { 00:48:27.640 "max_subsystems": 1024 00:48:27.640 } 00:48:27.640 }, 00:48:27.640 { 00:48:27.640 "method": "nvmf_set_crdt", 00:48:27.640 "params": { 00:48:27.640 "crdt1": 0, 00:48:27.640 "crdt2": 0, 00:48:27.640 "crdt3": 0 00:48:27.640 } 00:48:27.640 }, 00:48:27.640 { 00:48:27.640 "method": "nvmf_create_transport", 00:48:27.640 "params": { 00:48:27.640 "trtype": "TCP", 00:48:27.640 "max_queue_depth": 128, 00:48:27.640 "max_io_qpairs_per_ctrlr": 127, 00:48:27.640 "in_capsule_data_size": 4096, 00:48:27.640 "max_io_size": 131072, 00:48:27.640 "io_unit_size": 131072, 00:48:27.640 "max_aq_depth": 128, 00:48:27.640 "num_shared_buffers": 511, 00:48:27.640 "buf_cache_size": 4294967295, 00:48:27.640 "dif_insert_or_strip": false, 00:48:27.640 "zcopy": false, 
00:48:27.640 "c2h_success": false, 00:48:27.640 "sock_priority": 0, 00:48:27.640 "abort_timeout_sec": 1, 00:48:27.640 "ack_timeout": 0, 00:48:27.640 "data_wr_pool_size": 0 00:48:27.640 } 00:48:27.640 }, 00:48:27.640 { 00:48:27.640 "method": "nvmf_create_subsystem", 00:48:27.640 "params": { 00:48:27.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:27.640 "allow_any_host": false, 00:48:27.640 "serial_number": "SPDK00000000000001", 00:48:27.640 "model_number": "SPDK bdev Controller", 00:48:27.640 "max_namespaces": 10, 00:48:27.640 "min_cntlid": 1, 00:48:27.640 "max_cntlid": 65519, 00:48:27.640 "ana_reporting": false 00:48:27.640 } 00:48:27.640 }, 00:48:27.640 { 00:48:27.640 "method": "nvmf_subsystem_add_host", 00:48:27.640 "params": { 00:48:27.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:27.640 "host": "nqn.2016-06.io.spdk:host1", 00:48:27.640 "psk": "key0" 00:48:27.640 } 00:48:27.640 }, 00:48:27.640 { 00:48:27.640 "method": "nvmf_subsystem_add_ns", 00:48:27.640 "params": { 00:48:27.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:27.640 "namespace": { 00:48:27.640 "nsid": 1, 00:48:27.640 "bdev_name": "malloc0", 00:48:27.640 "nguid": "033676C33C3F4433AB1DBC6ED4ED5E3B", 00:48:27.640 "uuid": "033676c3-3c3f-4433-ab1d-bc6ed4ed5e3b", 00:48:27.640 "no_auto_visible": false 00:48:27.640 } 00:48:27.640 } 00:48:27.640 }, 00:48:27.640 { 00:48:27.640 "method": "nvmf_subsystem_add_listener", 00:48:27.640 "params": { 00:48:27.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:27.640 "listen_address": { 00:48:27.640 "trtype": "TCP", 00:48:27.640 "adrfam": "IPv4", 00:48:27.640 "traddr": "10.0.0.3", 00:48:27.640 "trsvcid": "4420" 00:48:27.640 }, 00:48:27.640 "secure_channel": true 00:48:27.640 } 00:48:27.640 } 00:48:27.640 ] 00:48:27.640 } 00:48:27.640 ] 00:48:27.640 }' 00:48:27.640 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:48:27.899 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:48:27.899 "subsystems": [ 00:48:27.899 { 00:48:27.899 "subsystem": "keyring", 00:48:27.899 "config": [ 00:48:27.899 { 00:48:27.899 "method": "keyring_file_add_key", 00:48:27.899 "params": { 00:48:27.899 "name": "key0", 00:48:27.899 "path": "/tmp/tmp.pnDhFlvg6S" 00:48:27.899 } 00:48:27.899 } 00:48:27.899 ] 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "subsystem": "iobuf", 00:48:27.899 "config": [ 00:48:27.899 { 00:48:27.899 "method": "iobuf_set_options", 00:48:27.899 "params": { 00:48:27.899 "small_pool_count": 8192, 00:48:27.899 "large_pool_count": 1024, 00:48:27.899 "small_bufsize": 8192, 00:48:27.899 "large_bufsize": 135168, 00:48:27.899 "enable_numa": false 00:48:27.899 } 00:48:27.899 } 00:48:27.899 ] 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "subsystem": "sock", 00:48:27.899 "config": [ 00:48:27.899 { 00:48:27.899 "method": "sock_set_default_impl", 00:48:27.899 "params": { 00:48:27.899 "impl_name": "uring" 00:48:27.899 } 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "method": "sock_impl_set_options", 00:48:27.899 "params": { 00:48:27.899 "impl_name": "ssl", 00:48:27.899 "recv_buf_size": 4096, 00:48:27.899 "send_buf_size": 4096, 00:48:27.899 "enable_recv_pipe": true, 00:48:27.899 "enable_quickack": false, 00:48:27.899 "enable_placement_id": 0, 00:48:27.899 "enable_zerocopy_send_server": true, 00:48:27.899 "enable_zerocopy_send_client": false, 00:48:27.899 "zerocopy_threshold": 0, 00:48:27.899 "tls_version": 0, 00:48:27.899 "enable_ktls": false 00:48:27.899 } 00:48:27.899 }, 
00:48:27.899 { 00:48:27.899 "method": "sock_impl_set_options", 00:48:27.899 "params": { 00:48:27.899 "impl_name": "posix", 00:48:27.899 "recv_buf_size": 2097152, 00:48:27.899 "send_buf_size": 2097152, 00:48:27.899 "enable_recv_pipe": true, 00:48:27.899 "enable_quickack": false, 00:48:27.899 "enable_placement_id": 0, 00:48:27.899 "enable_zerocopy_send_server": true, 00:48:27.899 "enable_zerocopy_send_client": false, 00:48:27.899 "zerocopy_threshold": 0, 00:48:27.899 "tls_version": 0, 00:48:27.899 "enable_ktls": false 00:48:27.899 } 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "method": "sock_impl_set_options", 00:48:27.899 "params": { 00:48:27.899 "impl_name": "uring", 00:48:27.899 "recv_buf_size": 2097152, 00:48:27.899 "send_buf_size": 2097152, 00:48:27.899 "enable_recv_pipe": true, 00:48:27.899 "enable_quickack": false, 00:48:27.899 "enable_placement_id": 0, 00:48:27.899 "enable_zerocopy_send_server": false, 00:48:27.899 "enable_zerocopy_send_client": false, 00:48:27.899 "zerocopy_threshold": 0, 00:48:27.899 "tls_version": 0, 00:48:27.899 "enable_ktls": false 00:48:27.899 } 00:48:27.899 } 00:48:27.899 ] 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "subsystem": "vmd", 00:48:27.899 "config": [] 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "subsystem": "accel", 00:48:27.899 "config": [ 00:48:27.899 { 00:48:27.899 "method": "accel_set_options", 00:48:27.899 "params": { 00:48:27.899 "small_cache_size": 128, 00:48:27.899 "large_cache_size": 16, 00:48:27.899 "task_count": 2048, 00:48:27.899 "sequence_count": 2048, 00:48:27.899 "buf_count": 2048 00:48:27.899 } 00:48:27.899 } 00:48:27.899 ] 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "subsystem": "bdev", 00:48:27.899 "config": [ 00:48:27.899 { 00:48:27.899 "method": "bdev_set_options", 00:48:27.899 "params": { 00:48:27.899 "bdev_io_pool_size": 65535, 00:48:27.899 "bdev_io_cache_size": 256, 00:48:27.899 "bdev_auto_examine": true, 00:48:27.899 "iobuf_small_cache_size": 128, 00:48:27.899 "iobuf_large_cache_size": 16 00:48:27.899 } 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "method": "bdev_raid_set_options", 00:48:27.899 "params": { 00:48:27.899 "process_window_size_kb": 1024, 00:48:27.899 "process_max_bandwidth_mb_sec": 0 00:48:27.899 } 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "method": "bdev_iscsi_set_options", 00:48:27.899 "params": { 00:48:27.899 "timeout_sec": 30 00:48:27.899 } 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "method": "bdev_nvme_set_options", 00:48:27.899 "params": { 00:48:27.899 "action_on_timeout": "none", 00:48:27.899 "timeout_us": 0, 00:48:27.899 "timeout_admin_us": 0, 00:48:27.899 "keep_alive_timeout_ms": 10000, 00:48:27.899 "arbitration_burst": 0, 00:48:27.899 "low_priority_weight": 0, 00:48:27.899 "medium_priority_weight": 0, 00:48:27.899 "high_priority_weight": 0, 00:48:27.899 "nvme_adminq_poll_period_us": 10000, 00:48:27.899 "nvme_ioq_poll_period_us": 0, 00:48:27.899 "io_queue_requests": 512, 00:48:27.899 "delay_cmd_submit": true, 00:48:27.899 "transport_retry_count": 4, 00:48:27.899 "bdev_retry_count": 3, 00:48:27.899 "transport_ack_timeout": 0, 00:48:27.899 "ctrlr_loss_timeout_sec": 0, 00:48:27.899 "reconnect_delay_sec": 0, 00:48:27.899 "fast_io_fail_timeout_sec": 0, 00:48:27.899 "disable_auto_failback": false, 00:48:27.899 "generate_uuids": false, 00:48:27.899 "transport_tos": 0, 00:48:27.899 "nvme_error_stat": false, 00:48:27.899 "rdma_srq_size": 0, 00:48:27.899 "io_path_stat": false, 00:48:27.899 "allow_accel_sequence": false, 00:48:27.899 "rdma_max_cq_size": 0, 00:48:27.899 "rdma_cm_event_timeout_ms": 0, 00:48:27.899 
"dhchap_digests": [ 00:48:27.899 "sha256", 00:48:27.899 "sha384", 00:48:27.899 "sha512" 00:48:27.899 ], 00:48:27.899 "dhchap_dhgroups": [ 00:48:27.899 "null", 00:48:27.899 "ffdhe2048", 00:48:27.899 "ffdhe3072", 00:48:27.899 "ffdhe4096", 00:48:27.899 "ffdhe6144", 00:48:27.899 "ffdhe8192" 00:48:27.899 ] 00:48:27.899 } 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "method": "bdev_nvme_attach_controller", 00:48:27.899 "params": { 00:48:27.899 "name": "TLSTEST", 00:48:27.899 "trtype": "TCP", 00:48:27.899 "adrfam": "IPv4", 00:48:27.899 "traddr": "10.0.0.3", 00:48:27.899 "trsvcid": "4420", 00:48:27.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:27.899 "prchk_reftag": false, 00:48:27.899 "prchk_guard": false, 00:48:27.899 "ctrlr_loss_timeout_sec": 0, 00:48:27.899 "reconnect_delay_sec": 0, 00:48:27.899 "fast_io_fail_timeout_sec": 0, 00:48:27.899 "psk": "key0", 00:48:27.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:27.899 "hdgst": false, 00:48:27.899 "ddgst": false, 00:48:27.899 "multipath": "multipath" 00:48:27.899 } 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "method": "bdev_nvme_set_hotplug", 00:48:27.899 "params": { 00:48:27.899 "period_us": 100000, 00:48:27.899 "enable": false 00:48:27.899 } 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "method": "bdev_wait_for_examine" 00:48:27.899 } 00:48:27.899 ] 00:48:27.899 }, 00:48:27.899 { 00:48:27.899 "subsystem": "nbd", 00:48:27.899 "config": [] 00:48:27.899 } 00:48:27.899 ] 00:48:27.899 }' 00:48:27.899 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72072 00:48:27.899 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72072 ']' 00:48:27.899 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72072 00:48:27.899 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:27.899 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:27.899 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72072 00:48:28.159 killing process with pid 72072 00:48:28.159 Received shutdown signal, test time was about 10.000000 seconds 00:48:28.159 00:48:28.159 Latency(us) 00:48:28.159 [2024-12-09T10:00:35.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:28.159 [2024-12-09T10:00:35.661Z] =================================================================================================================== 00:48:28.159 [2024-12-09T10:00:35.661Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72072' 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72072 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72072 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72024 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72024 ']' 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72024 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72024 00:48:28.159 killing process with pid 72024 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72024' 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72024 00:48:28.159 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72024 00:48:28.419 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:48:28.419 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:28.419 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:28.419 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:28.419 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:48:28.419 "subsystems": [ 00:48:28.419 { 00:48:28.419 "subsystem": "keyring", 00:48:28.419 "config": [ 00:48:28.419 { 00:48:28.419 "method": "keyring_file_add_key", 00:48:28.419 "params": { 00:48:28.419 "name": "key0", 00:48:28.419 "path": "/tmp/tmp.pnDhFlvg6S" 00:48:28.419 } 00:48:28.419 } 00:48:28.419 ] 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "subsystem": "iobuf", 00:48:28.419 "config": [ 00:48:28.419 { 00:48:28.419 "method": "iobuf_set_options", 00:48:28.419 "params": { 00:48:28.419 "small_pool_count": 8192, 00:48:28.419 "large_pool_count": 1024, 00:48:28.419 "small_bufsize": 8192, 00:48:28.419 "large_bufsize": 135168, 00:48:28.419 "enable_numa": false 00:48:28.419 } 00:48:28.419 } 00:48:28.419 ] 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "subsystem": "sock", 00:48:28.419 "config": [ 00:48:28.419 { 00:48:28.419 "method": "sock_set_default_impl", 00:48:28.419 "params": { 00:48:28.419 "impl_name": "uring" 00:48:28.419 } 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "method": "sock_impl_set_options", 00:48:28.419 "params": { 00:48:28.419 "impl_name": "ssl", 00:48:28.419 "recv_buf_size": 4096, 00:48:28.419 "send_buf_size": 4096, 00:48:28.419 "enable_recv_pipe": true, 00:48:28.419 "enable_quickack": false, 00:48:28.419 "enable_placement_id": 0, 00:48:28.419 "enable_zerocopy_send_server": true, 00:48:28.419 "enable_zerocopy_send_client": false, 00:48:28.419 "zerocopy_threshold": 0, 00:48:28.419 "tls_version": 0, 00:48:28.419 "enable_ktls": false 00:48:28.419 } 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "method": "sock_impl_set_options", 00:48:28.419 "params": { 00:48:28.419 "impl_name": "posix", 00:48:28.419 "recv_buf_size": 2097152, 00:48:28.419 "send_buf_size": 2097152, 00:48:28.419 "enable_recv_pipe": true, 00:48:28.419 "enable_quickack": false, 00:48:28.419 "enable_placement_id": 0, 00:48:28.419 "enable_zerocopy_send_server": true, 00:48:28.419 "enable_zerocopy_send_client": false, 00:48:28.419 "zerocopy_threshold": 0, 00:48:28.419 "tls_version": 0, 00:48:28.419 "enable_ktls": false 
00:48:28.419 } 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "method": "sock_impl_set_options", 00:48:28.419 "params": { 00:48:28.419 "impl_name": "uring", 00:48:28.419 "recv_buf_size": 2097152, 00:48:28.419 "send_buf_size": 2097152, 00:48:28.419 "enable_recv_pipe": true, 00:48:28.419 "enable_quickack": false, 00:48:28.419 "enable_placement_id": 0, 00:48:28.419 "enable_zerocopy_send_server": false, 00:48:28.419 "enable_zerocopy_send_client": false, 00:48:28.419 "zerocopy_threshold": 0, 00:48:28.419 "tls_version": 0, 00:48:28.419 "enable_ktls": false 00:48:28.419 } 00:48:28.419 } 00:48:28.419 ] 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "subsystem": "vmd", 00:48:28.419 "config": [] 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "subsystem": "accel", 00:48:28.419 "config": [ 00:48:28.419 { 00:48:28.419 "method": "accel_set_options", 00:48:28.419 "params": { 00:48:28.419 "small_cache_size": 128, 00:48:28.419 "large_cache_size": 16, 00:48:28.419 "task_count": 2048, 00:48:28.419 "sequence_count": 2048, 00:48:28.419 "buf_count": 2048 00:48:28.419 } 00:48:28.419 } 00:48:28.419 ] 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "subsystem": "bdev", 00:48:28.419 "config": [ 00:48:28.419 { 00:48:28.419 "method": "bdev_set_options", 00:48:28.419 "params": { 00:48:28.419 "bdev_io_pool_size": 65535, 00:48:28.419 "bdev_io_cache_size": 256, 00:48:28.419 "bdev_auto_examine": true, 00:48:28.419 "iobuf_small_cache_size": 128, 00:48:28.419 "iobuf_large_cache_size": 16 00:48:28.419 } 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "method": "bdev_raid_set_options", 00:48:28.419 "params": { 00:48:28.419 "process_window_size_kb": 1024, 00:48:28.419 "process_max_bandwidth_mb_sec": 0 00:48:28.419 } 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "method": "bdev_iscsi_set_options", 00:48:28.419 "params": { 00:48:28.419 "timeout_sec": 30 00:48:28.419 } 00:48:28.419 }, 00:48:28.419 { 00:48:28.419 "method": "bdev_nvme_set_options", 00:48:28.419 "params": { 00:48:28.419 "action_on_timeout": "none", 00:48:28.419 "timeout_us": 0, 00:48:28.419 "timeout_admin_us": 0, 00:48:28.419 "keep_alive_timeout_ms": 10000, 00:48:28.419 "arbitration_burst": 0, 00:48:28.419 "low_priority_weight": 0, 00:48:28.419 "medium_priority_weight": 0, 00:48:28.419 "high_priority_weight": 0, 00:48:28.419 "nvme_adminq_poll_period_us": 10000, 00:48:28.419 "nvme_ioq_poll_period_us": 0, 00:48:28.419 "io_queue_requests": 0, 00:48:28.419 "delay_cmd_submit": true, 00:48:28.419 "transport_retry_count": 4, 00:48:28.419 "bdev_retry_count": 3, 00:48:28.419 "transport_ack_timeout": 0, 00:48:28.419 "ctrlr_loss_timeout_sec": 0, 00:48:28.419 "reconnect_delay_sec": 0, 00:48:28.419 "fast_io_fail_timeout_sec": 0, 00:48:28.419 "disable_auto_failback": false, 00:48:28.419 "generate_uuids": false, 00:48:28.419 "transport_tos": 0, 00:48:28.419 "nvme_error_stat": false, 00:48:28.419 "rdma_srq_size": 0, 00:48:28.419 "io_path_stat": false, 00:48:28.419 "allow_accel_sequence": false, 00:48:28.419 "rdma_max_cq_size": 0, 00:48:28.419 "rdma_cm_event_timeout_ms": 0, 00:48:28.419 "dhchap_digests": [ 00:48:28.419 "sha256", 00:48:28.419 "sha384", 00:48:28.419 "sha512" 00:48:28.419 ], 00:48:28.419 "dhchap_dhgroups": [ 00:48:28.419 "null", 00:48:28.419 "ffdhe2048", 00:48:28.419 "ffdhe3072", 00:48:28.419 "ffdhe4096", 00:48:28.419 "ffdhe6144", 00:48:28.419 "ffdhe8192" 00:48:28.419 ] 00:48:28.420 } 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "method": "bdev_nvme_set_hotplug", 00:48:28.420 "params": { 00:48:28.420 "period_us": 100000, 00:48:28.420 "enable": false 00:48:28.420 } 00:48:28.420 }, 
00:48:28.420 { 00:48:28.420 "method": "bdev_malloc_create", 00:48:28.420 "params": { 00:48:28.420 "name": "malloc0", 00:48:28.420 "num_blocks": 8192, 00:48:28.420 "block_size": 4096, 00:48:28.420 "physical_block_size": 4096, 00:48:28.420 "uuid": "033676c3-3c3f-4433-ab1d-bc6ed4ed5e3b", 00:48:28.420 "optimal_io_boundary": 0, 00:48:28.420 "md_size": 0, 00:48:28.420 "dif_type": 0, 00:48:28.420 "dif_is_head_of_md": false, 00:48:28.420 "dif_pi_format": 0 00:48:28.420 } 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "method": "bdev_wait_for_examine" 00:48:28.420 } 00:48:28.420 ] 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "subsystem": "nbd", 00:48:28.420 "config": [] 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "subsystem": "scheduler", 00:48:28.420 "config": [ 00:48:28.420 { 00:48:28.420 "method": "framework_set_scheduler", 00:48:28.420 "params": { 00:48:28.420 "name": "static" 00:48:28.420 } 00:48:28.420 } 00:48:28.420 ] 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "subsystem": "nvmf", 00:48:28.420 "config": [ 00:48:28.420 { 00:48:28.420 "method": "nvmf_set_config", 00:48:28.420 "params": { 00:48:28.420 "discovery_filter": "match_any", 00:48:28.420 "admin_cmd_passthru": { 00:48:28.420 "identify_ctrlr": false 00:48:28.420 }, 00:48:28.420 "dhchap_digests": [ 00:48:28.420 "sha256", 00:48:28.420 "sha384", 00:48:28.420 "sha512" 00:48:28.420 ], 00:48:28.420 "dhchap_dhgroups": [ 00:48:28.420 "null", 00:48:28.420 "ffdhe2048", 00:48:28.420 "ffdhe3072", 00:48:28.420 "ffdhe4096", 00:48:28.420 "ffdhe6144", 00:48:28.420 "ffdhe8192" 00:48:28.420 ] 00:48:28.420 } 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "method": "nvmf_set_max_subsystems", 00:48:28.420 "params": { 00:48:28.420 "max_subsystems": 1024 00:48:28.420 } 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "method": "nvmf_set_crdt", 00:48:28.420 "params": { 00:48:28.420 "crdt1": 0, 00:48:28.420 "crdt2": 0, 00:48:28.420 "crdt3": 0 00:48:28.420 } 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "method": "nvmf_create_transport", 00:48:28.420 "params": { 00:48:28.420 "trtype": "TCP", 00:48:28.420 "max_queue_depth": 128, 00:48:28.420 "max_io_qpairs_per_ctrlr": 127, 00:48:28.420 "in_capsule_data_size": 4096, 00:48:28.420 "max_io_size": 131072, 00:48:28.420 "io_unit_size": 131072, 00:48:28.420 "max_aq_depth": 128, 00:48:28.420 "num_shared_buffers": 511, 00:48:28.420 "buf_cache_size": 4294967295, 00:48:28.420 "dif_insert_or_strip": false, 00:48:28.420 "zcopy": false, 00:48:28.420 "c2h_success": false, 00:48:28.420 "sock_priority": 0, 00:48:28.420 "abort_timeout_sec": 1, 00:48:28.420 "ack_timeout": 0, 00:48:28.420 "data_wr_pool_size": 0 00:48:28.420 } 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "method": "nvmf_create_subsystem", 00:48:28.420 "params": { 00:48:28.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:28.420 "allow_any_host": false, 00:48:28.420 "serial_number": "SPDK00000000000001", 00:48:28.420 "model_number": "SPDK bdev Controller", 00:48:28.420 "max_namespaces": 10, 00:48:28.420 "min_cntlid": 1, 00:48:28.420 "max_cntlid": 65519, 00:48:28.420 "ana_reporting": false 00:48:28.420 } 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "method": "nvmf_subsystem_add_host", 00:48:28.420 "params": { 00:48:28.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:28.420 "host": "nqn.2016-06.io.spdk:host1", 00:48:28.420 "psk": "key0" 00:48:28.420 } 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "method": "nvmf_subsystem_add_ns", 00:48:28.420 "params": { 00:48:28.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:28.420 "namespace": { 00:48:28.420 "nsid": 1, 00:48:28.420 "bdev_name": "malloc0", 
00:48:28.420 "nguid": "033676C33C3F4433AB1DBC6ED4ED5E3B", 00:48:28.420 "uuid": "033676c3-3c3f-4433-ab1d-bc6ed4ed5e3b", 00:48:28.420 "no_auto_visible": false 00:48:28.420 } 00:48:28.420 } 00:48:28.420 }, 00:48:28.420 { 00:48:28.420 "method": "nvmf_subsystem_add_listener", 00:48:28.420 "params": { 00:48:28.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:28.420 "listen_address": { 00:48:28.420 "trtype": "TCP", 00:48:28.420 "adrfam": "IPv4", 00:48:28.420 "traddr": "10.0.0.3", 00:48:28.420 "trsvcid": "4420" 00:48:28.420 }, 00:48:28.420 "secure_channel": true 00:48:28.420 } 00:48:28.420 } 00:48:28.420 ] 00:48:28.420 } 00:48:28.420 ] 00:48:28.420 }' 00:48:28.420 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72114 00:48:28.420 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:48:28.420 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72114 00:48:28.420 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72114 ']' 00:48:28.420 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:28.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:28.420 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:28.420 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:28.420 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:28.420 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:28.420 [2024-12-09 10:00:35.886531] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:28.420 [2024-12-09 10:00:35.887341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:28.679 [2024-12-09 10:00:36.041090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:28.679 [2024-12-09 10:00:36.079609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:28.679 [2024-12-09 10:00:36.079667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:28.679 [2024-12-09 10:00:36.079683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:28.679 [2024-12-09 10:00:36.079693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:28.679 [2024-12-09 10:00:36.079701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:48:28.679 [2024-12-09 10:00:36.080146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:28.938 [2024-12-09 10:00:36.229246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:28.938 [2024-12-09 10:00:36.292559] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:28.938 [2024-12-09 10:00:36.324488] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:28.938 [2024-12-09 10:00:36.324725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:29.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72146 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72146 /var/tmp/bdevperf.sock 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72146 ']' 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
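Annotator's note: the bdevperf instance started next runs in idle RPC-server mode (-z) on /var/tmp/bdevperf.sock; the test feeds it the configuration echoed below on /dev/fd/63 and only then drives I/O with bdevperf.py perform_tests. A condensed sketch of that pattern, with paths and flags copied from the trace (bdevperf_tls.json is a hypothetical stand-in for the /dev/fd/63 here-document, and the trailing & is added here for illustration):

    # start bdevperf waiting for RPCs (-z) on its own UNIX socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperf_tls.json &
    # once the controller is attached, kick off the workload over the same socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests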
00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:29.505 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:48:29.505 "subsystems": [ 00:48:29.505 { 00:48:29.505 "subsystem": "keyring", 00:48:29.505 "config": [ 00:48:29.505 { 00:48:29.505 "method": "keyring_file_add_key", 00:48:29.505 "params": { 00:48:29.505 "name": "key0", 00:48:29.505 "path": "/tmp/tmp.pnDhFlvg6S" 00:48:29.505 } 00:48:29.505 } 00:48:29.505 ] 00:48:29.505 }, 00:48:29.505 { 00:48:29.505 "subsystem": "iobuf", 00:48:29.505 "config": [ 00:48:29.505 { 00:48:29.505 "method": "iobuf_set_options", 00:48:29.505 "params": { 00:48:29.505 "small_pool_count": 8192, 00:48:29.505 "large_pool_count": 1024, 00:48:29.505 "small_bufsize": 8192, 00:48:29.505 "large_bufsize": 135168, 00:48:29.505 "enable_numa": false 00:48:29.505 } 00:48:29.505 } 00:48:29.505 ] 00:48:29.505 }, 00:48:29.505 { 00:48:29.505 "subsystem": "sock", 00:48:29.505 "config": [ 00:48:29.505 { 00:48:29.505 "method": "sock_set_default_impl", 00:48:29.505 "params": { 00:48:29.505 "impl_name": "uring" 00:48:29.505 } 00:48:29.505 }, 00:48:29.505 { 00:48:29.505 "method": "sock_impl_set_options", 00:48:29.505 "params": { 00:48:29.505 "impl_name": "ssl", 00:48:29.505 "recv_buf_size": 4096, 00:48:29.505 "send_buf_size": 4096, 00:48:29.505 "enable_recv_pipe": true, 00:48:29.505 "enable_quickack": false, 00:48:29.505 "enable_placement_id": 0, 00:48:29.505 "enable_zerocopy_send_server": true, 00:48:29.505 "enable_zerocopy_send_client": false, 00:48:29.505 "zerocopy_threshold": 0, 00:48:29.505 "tls_version": 0, 00:48:29.505 "enable_ktls": false 00:48:29.505 } 00:48:29.505 }, 00:48:29.505 { 00:48:29.505 "method": "sock_impl_set_options", 00:48:29.505 "params": { 00:48:29.505 "impl_name": "posix", 00:48:29.505 "recv_buf_size": 2097152, 00:48:29.505 "send_buf_size": 2097152, 00:48:29.505 "enable_recv_pipe": true, 00:48:29.505 "enable_quickack": false, 00:48:29.505 "enable_placement_id": 0, 00:48:29.505 "enable_zerocopy_send_server": true, 00:48:29.505 "enable_zerocopy_send_client": false, 00:48:29.505 "zerocopy_threshold": 0, 00:48:29.505 "tls_version": 0, 00:48:29.505 "enable_ktls": false 00:48:29.505 } 00:48:29.505 }, 00:48:29.505 { 00:48:29.505 "method": "sock_impl_set_options", 00:48:29.505 "params": { 00:48:29.505 "impl_name": "uring", 00:48:29.505 "recv_buf_size": 2097152, 00:48:29.505 "send_buf_size": 2097152, 00:48:29.505 "enable_recv_pipe": true, 00:48:29.505 "enable_quickack": false, 00:48:29.505 "enable_placement_id": 0, 00:48:29.505 "enable_zerocopy_send_server": false, 00:48:29.505 "enable_zerocopy_send_client": false, 00:48:29.505 "zerocopy_threshold": 0, 00:48:29.505 "tls_version": 0, 00:48:29.505 "enable_ktls": false 00:48:29.505 } 00:48:29.505 } 00:48:29.505 ] 00:48:29.505 }, 00:48:29.505 { 00:48:29.505 "subsystem": "vmd", 00:48:29.505 "config": [] 00:48:29.505 }, 00:48:29.505 { 00:48:29.505 "subsystem": "accel", 00:48:29.505 "config": [ 00:48:29.505 { 00:48:29.505 "method": "accel_set_options", 00:48:29.505 "params": { 00:48:29.505 "small_cache_size": 128, 00:48:29.505 "large_cache_size": 16, 00:48:29.505 "task_count": 2048, 00:48:29.505 "sequence_count": 
2048, 00:48:29.505 "buf_count": 2048 00:48:29.505 } 00:48:29.505 } 00:48:29.505 ] 00:48:29.505 }, 00:48:29.505 { 00:48:29.505 "subsystem": "bdev", 00:48:29.505 "config": [ 00:48:29.505 { 00:48:29.505 "method": "bdev_set_options", 00:48:29.505 "params": { 00:48:29.505 "bdev_io_pool_size": 65535, 00:48:29.505 "bdev_io_cache_size": 256, 00:48:29.505 "bdev_auto_examine": true, 00:48:29.505 "iobuf_small_cache_size": 128, 00:48:29.505 "iobuf_large_cache_size": 16 00:48:29.505 } 00:48:29.505 }, 00:48:29.505 { 00:48:29.505 "method": "bdev_raid_set_options", 00:48:29.505 "params": { 00:48:29.505 "process_window_size_kb": 1024, 00:48:29.505 "process_max_bandwidth_mb_sec": 0 00:48:29.505 } 00:48:29.505 }, 00:48:29.505 { 00:48:29.505 "method": "bdev_iscsi_set_options", 00:48:29.505 "params": { 00:48:29.506 "timeout_sec": 30 00:48:29.506 } 00:48:29.506 }, 00:48:29.506 { 00:48:29.506 "method": "bdev_nvme_set_options", 00:48:29.506 "params": { 00:48:29.506 "action_on_timeout": "none", 00:48:29.506 "timeout_us": 0, 00:48:29.506 "timeout_admin_us": 0, 00:48:29.506 "keep_alive_timeout_ms": 10000, 00:48:29.506 "arbitration_burst": 0, 00:48:29.506 "low_priority_weight": 0, 00:48:29.506 "medium_priority_weight": 0, 00:48:29.506 "high_priority_weight": 0, 00:48:29.506 "nvme_adminq_poll_period_us": 10000, 00:48:29.506 "nvme_ioq_poll_period_us": 0, 00:48:29.506 "io_queue_requests": 512, 00:48:29.506 "delay_cmd_submit": true, 00:48:29.506 "transport_retry_count": 4, 00:48:29.506 "bdev_retry_count": 3, 00:48:29.506 "transport_ack_timeout": 0, 00:48:29.506 "ctrlr_loss_timeout_sec": 0, 00:48:29.506 "reconnect_delay_sec": 0, 00:48:29.506 "fast_io_fail_timeout_sec": 0, 00:48:29.506 "disable_auto_failback": false, 00:48:29.506 "generate_uuids": false, 00:48:29.506 "transport_tos": 0, 00:48:29.506 "nvme_error_stat": false, 00:48:29.506 "rdma_srq_size": 0, 00:48:29.506 "io_path_stat": false, 00:48:29.506 "allow_accel_sequence": false, 00:48:29.506 "rdma_max_cq_size": 0, 00:48:29.506 "rdma_cm_event_timeout_ms": 0, 00:48:29.506 "dhchap_digests": [ 00:48:29.506 "sha256", 00:48:29.506 "sha384", 00:48:29.506 "sha512" 00:48:29.506 ], 00:48:29.506 "dhchap_dhgroups": [ 00:48:29.506 "null", 00:48:29.506 "ffdhe2048", 00:48:29.506 "ffdhe3072", 00:48:29.506 "ffdhe4096", 00:48:29.506 "ffdhe6144", 00:48:29.506 "ffdhe8192" 00:48:29.506 ] 00:48:29.506 } 00:48:29.506 }, 00:48:29.506 { 00:48:29.506 "method": "bdev_nvme_attach_controller", 00:48:29.506 "params": { 00:48:29.506 "name": "TLSTEST", 00:48:29.506 "trtype": "TCP", 00:48:29.506 "adrfam": "IPv4", 00:48:29.506 "traddr": "10.0.0.3", 00:48:29.506 "trsvcid": "4420", 00:48:29.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:29.506 "prchk_reftag": false, 00:48:29.506 "prchk_guard": false, 00:48:29.506 "ctrlr_loss_timeout_sec": 0, 00:48:29.506 "reconnect_delay_sec": 0, 00:48:29.506 "fast_io_fail_timeout_sec": 0, 00:48:29.506 "psk": "key0", 00:48:29.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:29.506 "hdgst": false, 00:48:29.506 "ddgst": false, 00:48:29.506 "multipath": "multipath" 00:48:29.506 } 00:48:29.506 }, 00:48:29.506 { 00:48:29.506 "method": "bdev_nvme_set_hotplug", 00:48:29.506 "params": { 00:48:29.506 "period_us": 100000, 00:48:29.506 "enable": false 00:48:29.506 } 00:48:29.506 }, 00:48:29.506 { 00:48:29.506 "method": "bdev_wait_for_examine" 00:48:29.506 } 00:48:29.506 ] 00:48:29.506 }, 00:48:29.506 { 00:48:29.506 "subsystem": "nbd", 00:48:29.506 "config": [] 00:48:29.506 } 00:48:29.506 ] 00:48:29.506 }' 00:48:29.506 [2024-12-09 10:00:36.946355] Starting SPDK v25.01-pre git 
sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:29.506 [2024-12-09 10:00:36.946602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72146 ] 00:48:29.764 [2024-12-09 10:00:37.097844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:29.764 [2024-12-09 10:00:37.137456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:29.764 [2024-12-09 10:00:37.252669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:30.023 [2024-12-09 10:00:37.288698] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:30.590 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:30.590 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:30.590 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:48:30.849 Running I/O for 10 seconds... 00:48:32.720 3968.00 IOPS, 15.50 MiB/s [2024-12-09T10:00:41.159Z] 4004.00 IOPS, 15.64 MiB/s [2024-12-09T10:00:42.536Z] 4027.33 IOPS, 15.73 MiB/s [2024-12-09T10:00:43.183Z] 4031.75 IOPS, 15.75 MiB/s [2024-12-09T10:00:44.560Z] 4038.00 IOPS, 15.77 MiB/s [2024-12-09T10:00:45.496Z] 4044.33 IOPS, 15.80 MiB/s [2024-12-09T10:00:46.431Z] 4044.71 IOPS, 15.80 MiB/s [2024-12-09T10:00:47.366Z] 4044.38 IOPS, 15.80 MiB/s [2024-12-09T10:00:48.303Z] 4045.78 IOPS, 15.80 MiB/s [2024-12-09T10:00:48.303Z] 4046.50 IOPS, 15.81 MiB/s 00:48:40.801 Latency(us) 00:48:40.801 [2024-12-09T10:00:48.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:40.801 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:48:40.801 Verification LBA range: start 0x0 length 0x2000 00:48:40.801 TLSTESTn1 : 10.02 4052.26 15.83 0.00 0.00 31528.47 5659.93 23831.27 00:48:40.801 [2024-12-09T10:00:48.303Z] =================================================================================================================== 00:48:40.801 [2024-12-09T10:00:48.303Z] Total : 4052.26 15.83 0.00 0.00 31528.47 5659.93 23831.27 00:48:40.801 { 00:48:40.801 "results": [ 00:48:40.801 { 00:48:40.801 "job": "TLSTESTn1", 00:48:40.801 "core_mask": "0x4", 00:48:40.801 "workload": "verify", 00:48:40.801 "status": "finished", 00:48:40.801 "verify_range": { 00:48:40.801 "start": 0, 00:48:40.801 "length": 8192 00:48:40.801 }, 00:48:40.801 "queue_depth": 128, 00:48:40.801 "io_size": 4096, 00:48:40.801 "runtime": 10.016876, 00:48:40.801 "iops": 4052.2614036551913, 00:48:40.801 "mibps": 15.829146108028091, 00:48:40.801 "io_failed": 0, 00:48:40.801 "io_timeout": 0, 00:48:40.801 "avg_latency_us": 31528.47153130676, 00:48:40.801 "min_latency_us": 5659.927272727273, 00:48:40.801 "max_latency_us": 23831.272727272728 00:48:40.801 } 00:48:40.801 ], 00:48:40.801 "core_count": 1 00:48:40.801 } 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72146 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72146 ']' 00:48:40.801 
10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72146 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72146 00:48:40.801 killing process with pid 72146 00:48:40.801 Received shutdown signal, test time was about 10.000000 seconds 00:48:40.801 00:48:40.801 Latency(us) 00:48:40.801 [2024-12-09T10:00:48.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:40.801 [2024-12-09T10:00:48.303Z] =================================================================================================================== 00:48:40.801 [2024-12-09T10:00:48.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72146' 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72146 00:48:40.801 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72146 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72114 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72114 ']' 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72114 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72114 00:48:41.061 killing process with pid 72114 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72114' 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72114 00:48:41.061 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72114 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72283 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72283 00:48:41.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72283 ']' 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:41.319 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:41.319 [2024-12-09 10:00:48.700631] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:41.319 [2024-12-09 10:00:48.700872] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:41.585 [2024-12-09 10:00:48.852122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:41.585 [2024-12-09 10:00:48.883552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:41.585 [2024-12-09 10:00:48.883609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:41.585 [2024-12-09 10:00:48.883622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:41.585 [2024-12-09 10:00:48.883630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:41.585 [2024-12-09 10:00:48.883637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:48:41.585 [2024-12-09 10:00:48.883948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:41.585 [2024-12-09 10:00:48.913352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:41.585 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:41.585 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:41.585 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:41.585 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:41.585 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:41.585 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:41.585 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.pnDhFlvg6S 00:48:41.585 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pnDhFlvg6S 00:48:41.585 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:48:41.845 [2024-12-09 10:00:49.245414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:41.845 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:48:42.103 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:48:42.669 [2024-12-09 10:00:49.865548] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:42.669 [2024-12-09 10:00:49.865927] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:42.669 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:48:42.927 malloc0 00:48:42.927 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:48:43.186 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S 00:48:43.443 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:48:43.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
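Annotator's note: condensed, the setup_nvmf_tgt helper traced above amounts to the following RPC sequence against the target's default socket. The commands are copied from the trace; RPC is only a shorthand introduced here for the full rpc.py path the script uses.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0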
00:48:43.700 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72338 00:48:43.700 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:48:43.700 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:43.700 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72338 /var/tmp/bdevperf.sock 00:48:43.700 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72338 ']' 00:48:43.700 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:43.700 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:43.700 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:43.700 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:43.700 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:43.700 [2024-12-09 10:00:51.038048] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:43.700 [2024-12-09 10:00:51.038437] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72338 ] 00:48:43.700 [2024-12-09 10:00:51.199359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:43.959 [2024-12-09 10:00:51.232648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:43.959 [2024-12-09 10:00:51.262771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:43.959 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:43.959 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:43.959 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S 00:48:44.217 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:48:44.485 [2024-12-09 10:00:51.951306] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:44.760 nvme0n1 00:48:44.760 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:48:44.760 Running I/O for 1 seconds... 
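Annotator's note: the host side mirrors that setup: the same PSK file is registered in the bdevperf app's keyring and handed to bdev_nvme_attach_controller with --psk, and bdevperf.py then runs the verify workload against the resulting nvme0n1 bdev. Commands as traced above; RPC is again shorthand for the script's rpc.py path.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests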
00:48:45.694 3938.00 IOPS, 15.38 MiB/s 00:48:45.694 Latency(us) 00:48:45.694 [2024-12-09T10:00:53.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:45.694 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:48:45.694 Verification LBA range: start 0x0 length 0x2000 00:48:45.694 nvme0n1 : 1.03 3955.56 15.45 0.00 0.00 31937.68 8519.68 21448.15 00:48:45.694 [2024-12-09T10:00:53.196Z] =================================================================================================================== 00:48:45.694 [2024-12-09T10:00:53.196Z] Total : 3955.56 15.45 0.00 0.00 31937.68 8519.68 21448.15 00:48:45.694 { 00:48:45.694 "results": [ 00:48:45.694 { 00:48:45.694 "job": "nvme0n1", 00:48:45.694 "core_mask": "0x2", 00:48:45.694 "workload": "verify", 00:48:45.694 "status": "finished", 00:48:45.694 "verify_range": { 00:48:45.694 "start": 0, 00:48:45.694 "length": 8192 00:48:45.694 }, 00:48:45.694 "queue_depth": 128, 00:48:45.694 "io_size": 4096, 00:48:45.694 "runtime": 1.02792, 00:48:45.694 "iops": 3955.5607440267727, 00:48:45.694 "mibps": 15.45140915635458, 00:48:45.694 "io_failed": 0, 00:48:45.694 "io_timeout": 0, 00:48:45.694 "avg_latency_us": 31937.678544023613, 00:48:45.694 "min_latency_us": 8519.68, 00:48:45.694 "max_latency_us": 21448.145454545454 00:48:45.694 } 00:48:45.694 ], 00:48:45.694 "core_count": 1 00:48:45.694 } 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72338 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72338 ']' 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72338 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72338 00:48:45.952 killing process with pid 72338 00:48:45.952 Received shutdown signal, test time was about 1.000000 seconds 00:48:45.952 00:48:45.952 Latency(us) 00:48:45.952 [2024-12-09T10:00:53.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:45.952 [2024-12-09T10:00:53.454Z] =================================================================================================================== 00:48:45.952 [2024-12-09T10:00:53.454Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72338' 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72338 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72338 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72283 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72283 ']' 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72283 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:45.952 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72283 00:48:46.210 killing process with pid 72283 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72283' 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72283 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72283 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72376 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72376 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72376 ']' 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:46.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:46.210 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:46.210 [2024-12-09 10:00:53.707868] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:46.210 [2024-12-09 10:00:53.707983] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:46.468 [2024-12-09 10:00:53.860738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:46.468 [2024-12-09 10:00:53.892395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:46.468 [2024-12-09 10:00:53.892457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:48:46.468 [2024-12-09 10:00:53.892470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:46.468 [2024-12-09 10:00:53.892478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:46.468 [2024-12-09 10:00:53.892485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:46.468 [2024-12-09 10:00:53.892802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:46.468 [2024-12-09 10:00:53.922656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:46.736 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:46.736 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:46.736 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:46.736 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:46.736 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:46.736 [2024-12-09 10:00:54.015236] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:46.736 malloc0 00:48:46.736 [2024-12-09 10:00:54.041683] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:46.736 [2024-12-09 10:00:54.042120] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72401 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72401 /var/tmp/bdevperf.sock 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72401 ']' 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:46.736 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:46.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
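For readability, the target-side setup this trace is exercising can be reconstructed from the configuration dumped later in the log. The lines below are a minimal sketch of an equivalent rpc.py sequence, not the literal tls.sh code: the NQNs, listener address, key name and key path are taken from the saved config, while the bare "rpc.py" invocation, the 32 MiB malloc sizing (8192 blocks x 4096 B) and the exact option spellings are assumptions that may differ between SPDK versions.

# Sketch: target-side TLS setup implied by the saved config (assumes rpc.py on PATH and a running nvmf_tgt)
rpc.py nvmf_create_transport -t tcp
rpc.py bdev_malloc_create -b malloc0 32 4096                     # backing namespace, 8192 x 4096 B blocks
rpc.py keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S             # PSK file registered in the keyring
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4
# the saved config additionally records sock_impl "ssl" and secure_channel false for this listener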
00:48:46.737 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:46.737 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:46.737 [2024-12-09 10:00:54.119920] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:46.737 [2024-12-09 10:00:54.120209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72401 ] 00:48:46.995 [2024-12-09 10:00:54.277435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:46.995 [2024-12-09 10:00:54.336599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:46.995 [2024-12-09 10:00:54.370193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:46.995 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:46.995 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:46.995 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S 00:48:47.253 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:48:47.511 [2024-12-09 10:00:54.970739] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:47.770 nvme0n1 00:48:47.770 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:48:47.770 Running I/O for 1 seconds... 
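Condensed from the xtrace output above, the bdevperf side of this pass comes down to three RPC calls against the bdevperf socket; paths, key name and NQNs are exactly as logged:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pnDhFlvg6S
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# kick off the 1-second verify run over the attached TLS-protected controller
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests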
00:48:48.832 3845.00 IOPS, 15.02 MiB/s 00:48:48.832 Latency(us) 00:48:48.832 [2024-12-09T10:00:56.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:48.832 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:48:48.832 Verification LBA range: start 0x0 length 0x2000 00:48:48.832 nvme0n1 : 1.02 3901.46 15.24 0.00 0.00 32426.67 592.06 21090.68 00:48:48.832 [2024-12-09T10:00:56.334Z] =================================================================================================================== 00:48:48.832 [2024-12-09T10:00:56.334Z] Total : 3901.46 15.24 0.00 0.00 32426.67 592.06 21090.68 00:48:48.832 { 00:48:48.832 "results": [ 00:48:48.832 { 00:48:48.832 "job": "nvme0n1", 00:48:48.832 "core_mask": "0x2", 00:48:48.832 "workload": "verify", 00:48:48.832 "status": "finished", 00:48:48.832 "verify_range": { 00:48:48.832 "start": 0, 00:48:48.832 "length": 8192 00:48:48.832 }, 00:48:48.832 "queue_depth": 128, 00:48:48.832 "io_size": 4096, 00:48:48.832 "runtime": 1.018338, 00:48:48.832 "iops": 3901.455116081301, 00:48:48.832 "mibps": 15.240059047192583, 00:48:48.832 "io_failed": 0, 00:48:48.832 "io_timeout": 0, 00:48:48.832 "avg_latency_us": 32426.669634578862, 00:48:48.832 "min_latency_us": 592.0581818181818, 00:48:48.832 "max_latency_us": 21090.676363636365 00:48:48.832 } 00:48:48.832 ], 00:48:48.832 "core_count": 1 00:48:48.832 } 00:48:48.832 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:48:48.832 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:48.832 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:49.091 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:49.091 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:48:49.091 "subsystems": [ 00:48:49.091 { 00:48:49.091 "subsystem": "keyring", 00:48:49.091 "config": [ 00:48:49.091 { 00:48:49.091 "method": "keyring_file_add_key", 00:48:49.091 "params": { 00:48:49.091 "name": "key0", 00:48:49.091 "path": "/tmp/tmp.pnDhFlvg6S" 00:48:49.091 } 00:48:49.091 } 00:48:49.091 ] 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "subsystem": "iobuf", 00:48:49.091 "config": [ 00:48:49.091 { 00:48:49.091 "method": "iobuf_set_options", 00:48:49.091 "params": { 00:48:49.091 "small_pool_count": 8192, 00:48:49.091 "large_pool_count": 1024, 00:48:49.091 "small_bufsize": 8192, 00:48:49.091 "large_bufsize": 135168, 00:48:49.091 "enable_numa": false 00:48:49.091 } 00:48:49.091 } 00:48:49.091 ] 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "subsystem": "sock", 00:48:49.091 "config": [ 00:48:49.091 { 00:48:49.091 "method": "sock_set_default_impl", 00:48:49.091 "params": { 00:48:49.091 "impl_name": "uring" 00:48:49.091 } 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "method": "sock_impl_set_options", 00:48:49.091 "params": { 00:48:49.091 "impl_name": "ssl", 00:48:49.091 "recv_buf_size": 4096, 00:48:49.091 "send_buf_size": 4096, 00:48:49.091 "enable_recv_pipe": true, 00:48:49.091 "enable_quickack": false, 00:48:49.091 "enable_placement_id": 0, 00:48:49.091 "enable_zerocopy_send_server": true, 00:48:49.091 "enable_zerocopy_send_client": false, 00:48:49.091 "zerocopy_threshold": 0, 00:48:49.091 "tls_version": 0, 00:48:49.091 "enable_ktls": false 00:48:49.091 } 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "method": "sock_impl_set_options", 00:48:49.091 "params": { 00:48:49.091 "impl_name": "posix", 
00:48:49.091 "recv_buf_size": 2097152, 00:48:49.091 "send_buf_size": 2097152, 00:48:49.091 "enable_recv_pipe": true, 00:48:49.091 "enable_quickack": false, 00:48:49.091 "enable_placement_id": 0, 00:48:49.091 "enable_zerocopy_send_server": true, 00:48:49.091 "enable_zerocopy_send_client": false, 00:48:49.091 "zerocopy_threshold": 0, 00:48:49.091 "tls_version": 0, 00:48:49.091 "enable_ktls": false 00:48:49.091 } 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "method": "sock_impl_set_options", 00:48:49.091 "params": { 00:48:49.091 "impl_name": "uring", 00:48:49.091 "recv_buf_size": 2097152, 00:48:49.091 "send_buf_size": 2097152, 00:48:49.091 "enable_recv_pipe": true, 00:48:49.091 "enable_quickack": false, 00:48:49.091 "enable_placement_id": 0, 00:48:49.091 "enable_zerocopy_send_server": false, 00:48:49.091 "enable_zerocopy_send_client": false, 00:48:49.091 "zerocopy_threshold": 0, 00:48:49.091 "tls_version": 0, 00:48:49.091 "enable_ktls": false 00:48:49.091 } 00:48:49.091 } 00:48:49.091 ] 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "subsystem": "vmd", 00:48:49.091 "config": [] 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "subsystem": "accel", 00:48:49.091 "config": [ 00:48:49.091 { 00:48:49.091 "method": "accel_set_options", 00:48:49.091 "params": { 00:48:49.091 "small_cache_size": 128, 00:48:49.091 "large_cache_size": 16, 00:48:49.091 "task_count": 2048, 00:48:49.091 "sequence_count": 2048, 00:48:49.091 "buf_count": 2048 00:48:49.091 } 00:48:49.091 } 00:48:49.091 ] 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "subsystem": "bdev", 00:48:49.091 "config": [ 00:48:49.091 { 00:48:49.091 "method": "bdev_set_options", 00:48:49.091 "params": { 00:48:49.091 "bdev_io_pool_size": 65535, 00:48:49.091 "bdev_io_cache_size": 256, 00:48:49.091 "bdev_auto_examine": true, 00:48:49.091 "iobuf_small_cache_size": 128, 00:48:49.091 "iobuf_large_cache_size": 16 00:48:49.091 } 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "method": "bdev_raid_set_options", 00:48:49.091 "params": { 00:48:49.091 "process_window_size_kb": 1024, 00:48:49.091 "process_max_bandwidth_mb_sec": 0 00:48:49.091 } 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "method": "bdev_iscsi_set_options", 00:48:49.091 "params": { 00:48:49.091 "timeout_sec": 30 00:48:49.091 } 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "method": "bdev_nvme_set_options", 00:48:49.091 "params": { 00:48:49.091 "action_on_timeout": "none", 00:48:49.091 "timeout_us": 0, 00:48:49.091 "timeout_admin_us": 0, 00:48:49.091 "keep_alive_timeout_ms": 10000, 00:48:49.091 "arbitration_burst": 0, 00:48:49.091 "low_priority_weight": 0, 00:48:49.091 "medium_priority_weight": 0, 00:48:49.091 "high_priority_weight": 0, 00:48:49.091 "nvme_adminq_poll_period_us": 10000, 00:48:49.091 "nvme_ioq_poll_period_us": 0, 00:48:49.091 "io_queue_requests": 0, 00:48:49.091 "delay_cmd_submit": true, 00:48:49.091 "transport_retry_count": 4, 00:48:49.091 "bdev_retry_count": 3, 00:48:49.091 "transport_ack_timeout": 0, 00:48:49.091 "ctrlr_loss_timeout_sec": 0, 00:48:49.091 "reconnect_delay_sec": 0, 00:48:49.091 "fast_io_fail_timeout_sec": 0, 00:48:49.091 "disable_auto_failback": false, 00:48:49.091 "generate_uuids": false, 00:48:49.091 "transport_tos": 0, 00:48:49.091 "nvme_error_stat": false, 00:48:49.091 "rdma_srq_size": 0, 00:48:49.091 "io_path_stat": false, 00:48:49.091 "allow_accel_sequence": false, 00:48:49.091 "rdma_max_cq_size": 0, 00:48:49.091 "rdma_cm_event_timeout_ms": 0, 00:48:49.091 "dhchap_digests": [ 00:48:49.091 "sha256", 00:48:49.091 "sha384", 00:48:49.091 "sha512" 00:48:49.091 ], 00:48:49.091 
"dhchap_dhgroups": [ 00:48:49.091 "null", 00:48:49.091 "ffdhe2048", 00:48:49.091 "ffdhe3072", 00:48:49.091 "ffdhe4096", 00:48:49.091 "ffdhe6144", 00:48:49.091 "ffdhe8192" 00:48:49.091 ] 00:48:49.091 } 00:48:49.091 }, 00:48:49.091 { 00:48:49.091 "method": "bdev_nvme_set_hotplug", 00:48:49.091 "params": { 00:48:49.091 "period_us": 100000, 00:48:49.091 "enable": false 00:48:49.091 } 00:48:49.091 }, 00:48:49.091 { 00:48:49.092 "method": "bdev_malloc_create", 00:48:49.092 "params": { 00:48:49.092 "name": "malloc0", 00:48:49.092 "num_blocks": 8192, 00:48:49.092 "block_size": 4096, 00:48:49.092 "physical_block_size": 4096, 00:48:49.092 "uuid": "228b8029-0c2a-471d-9d67-7b6746977e4a", 00:48:49.092 "optimal_io_boundary": 0, 00:48:49.092 "md_size": 0, 00:48:49.092 "dif_type": 0, 00:48:49.092 "dif_is_head_of_md": false, 00:48:49.092 "dif_pi_format": 0 00:48:49.092 } 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "method": "bdev_wait_for_examine" 00:48:49.092 } 00:48:49.092 ] 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "subsystem": "nbd", 00:48:49.092 "config": [] 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "subsystem": "scheduler", 00:48:49.092 "config": [ 00:48:49.092 { 00:48:49.092 "method": "framework_set_scheduler", 00:48:49.092 "params": { 00:48:49.092 "name": "static" 00:48:49.092 } 00:48:49.092 } 00:48:49.092 ] 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "subsystem": "nvmf", 00:48:49.092 "config": [ 00:48:49.092 { 00:48:49.092 "method": "nvmf_set_config", 00:48:49.092 "params": { 00:48:49.092 "discovery_filter": "match_any", 00:48:49.092 "admin_cmd_passthru": { 00:48:49.092 "identify_ctrlr": false 00:48:49.092 }, 00:48:49.092 "dhchap_digests": [ 00:48:49.092 "sha256", 00:48:49.092 "sha384", 00:48:49.092 "sha512" 00:48:49.092 ], 00:48:49.092 "dhchap_dhgroups": [ 00:48:49.092 "null", 00:48:49.092 "ffdhe2048", 00:48:49.092 "ffdhe3072", 00:48:49.092 "ffdhe4096", 00:48:49.092 "ffdhe6144", 00:48:49.092 "ffdhe8192" 00:48:49.092 ] 00:48:49.092 } 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "method": "nvmf_set_max_subsystems", 00:48:49.092 "params": { 00:48:49.092 "max_subsystems": 1024 00:48:49.092 } 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "method": "nvmf_set_crdt", 00:48:49.092 "params": { 00:48:49.092 "crdt1": 0, 00:48:49.092 "crdt2": 0, 00:48:49.092 "crdt3": 0 00:48:49.092 } 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "method": "nvmf_create_transport", 00:48:49.092 "params": { 00:48:49.092 "trtype": "TCP", 00:48:49.092 "max_queue_depth": 128, 00:48:49.092 "max_io_qpairs_per_ctrlr": 127, 00:48:49.092 "in_capsule_data_size": 4096, 00:48:49.092 "max_io_size": 131072, 00:48:49.092 "io_unit_size": 131072, 00:48:49.092 "max_aq_depth": 128, 00:48:49.092 "num_shared_buffers": 511, 00:48:49.092 "buf_cache_size": 4294967295, 00:48:49.092 "dif_insert_or_strip": false, 00:48:49.092 "zcopy": false, 00:48:49.092 "c2h_success": false, 00:48:49.092 "sock_priority": 0, 00:48:49.092 "abort_timeout_sec": 1, 00:48:49.092 "ack_timeout": 0, 00:48:49.092 "data_wr_pool_size": 0 00:48:49.092 } 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "method": "nvmf_create_subsystem", 00:48:49.092 "params": { 00:48:49.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:49.092 "allow_any_host": false, 00:48:49.092 "serial_number": "00000000000000000000", 00:48:49.092 "model_number": "SPDK bdev Controller", 00:48:49.092 "max_namespaces": 32, 00:48:49.092 "min_cntlid": 1, 00:48:49.092 "max_cntlid": 65519, 00:48:49.092 "ana_reporting": false 00:48:49.092 } 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "method": "nvmf_subsystem_add_host", 
00:48:49.092 "params": { 00:48:49.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:49.092 "host": "nqn.2016-06.io.spdk:host1", 00:48:49.092 "psk": "key0" 00:48:49.092 } 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "method": "nvmf_subsystem_add_ns", 00:48:49.092 "params": { 00:48:49.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:49.092 "namespace": { 00:48:49.092 "nsid": 1, 00:48:49.092 "bdev_name": "malloc0", 00:48:49.092 "nguid": "228B80290C2A471D9D677B6746977E4A", 00:48:49.092 "uuid": "228b8029-0c2a-471d-9d67-7b6746977e4a", 00:48:49.092 "no_auto_visible": false 00:48:49.092 } 00:48:49.092 } 00:48:49.092 }, 00:48:49.092 { 00:48:49.092 "method": "nvmf_subsystem_add_listener", 00:48:49.092 "params": { 00:48:49.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:49.092 "listen_address": { 00:48:49.092 "trtype": "TCP", 00:48:49.092 "adrfam": "IPv4", 00:48:49.092 "traddr": "10.0.0.3", 00:48:49.092 "trsvcid": "4420" 00:48:49.092 }, 00:48:49.092 "secure_channel": false, 00:48:49.092 "sock_impl": "ssl" 00:48:49.092 } 00:48:49.092 } 00:48:49.092 ] 00:48:49.092 } 00:48:49.092 ] 00:48:49.092 }' 00:48:49.092 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:48:49.351 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:48:49.351 "subsystems": [ 00:48:49.351 { 00:48:49.351 "subsystem": "keyring", 00:48:49.351 "config": [ 00:48:49.351 { 00:48:49.351 "method": "keyring_file_add_key", 00:48:49.351 "params": { 00:48:49.351 "name": "key0", 00:48:49.351 "path": "/tmp/tmp.pnDhFlvg6S" 00:48:49.351 } 00:48:49.351 } 00:48:49.351 ] 00:48:49.351 }, 00:48:49.351 { 00:48:49.351 "subsystem": "iobuf", 00:48:49.351 "config": [ 00:48:49.351 { 00:48:49.351 "method": "iobuf_set_options", 00:48:49.351 "params": { 00:48:49.351 "small_pool_count": 8192, 00:48:49.351 "large_pool_count": 1024, 00:48:49.351 "small_bufsize": 8192, 00:48:49.351 "large_bufsize": 135168, 00:48:49.351 "enable_numa": false 00:48:49.351 } 00:48:49.351 } 00:48:49.351 ] 00:48:49.351 }, 00:48:49.351 { 00:48:49.351 "subsystem": "sock", 00:48:49.351 "config": [ 00:48:49.351 { 00:48:49.351 "method": "sock_set_default_impl", 00:48:49.351 "params": { 00:48:49.351 "impl_name": "uring" 00:48:49.351 } 00:48:49.351 }, 00:48:49.351 { 00:48:49.351 "method": "sock_impl_set_options", 00:48:49.351 "params": { 00:48:49.351 "impl_name": "ssl", 00:48:49.351 "recv_buf_size": 4096, 00:48:49.351 "send_buf_size": 4096, 00:48:49.351 "enable_recv_pipe": true, 00:48:49.351 "enable_quickack": false, 00:48:49.351 "enable_placement_id": 0, 00:48:49.351 "enable_zerocopy_send_server": true, 00:48:49.351 "enable_zerocopy_send_client": false, 00:48:49.351 "zerocopy_threshold": 0, 00:48:49.351 "tls_version": 0, 00:48:49.351 "enable_ktls": false 00:48:49.351 } 00:48:49.351 }, 00:48:49.351 { 00:48:49.351 "method": "sock_impl_set_options", 00:48:49.351 "params": { 00:48:49.351 "impl_name": "posix", 00:48:49.351 "recv_buf_size": 2097152, 00:48:49.351 "send_buf_size": 2097152, 00:48:49.351 "enable_recv_pipe": true, 00:48:49.351 "enable_quickack": false, 00:48:49.351 "enable_placement_id": 0, 00:48:49.351 "enable_zerocopy_send_server": true, 00:48:49.351 "enable_zerocopy_send_client": false, 00:48:49.351 "zerocopy_threshold": 0, 00:48:49.351 "tls_version": 0, 00:48:49.351 "enable_ktls": false 00:48:49.351 } 00:48:49.351 }, 00:48:49.351 { 00:48:49.351 "method": "sock_impl_set_options", 00:48:49.351 "params": { 00:48:49.351 "impl_name": "uring", 00:48:49.351 
"recv_buf_size": 2097152, 00:48:49.351 "send_buf_size": 2097152, 00:48:49.351 "enable_recv_pipe": true, 00:48:49.351 "enable_quickack": false, 00:48:49.351 "enable_placement_id": 0, 00:48:49.351 "enable_zerocopy_send_server": false, 00:48:49.351 "enable_zerocopy_send_client": false, 00:48:49.351 "zerocopy_threshold": 0, 00:48:49.351 "tls_version": 0, 00:48:49.351 "enable_ktls": false 00:48:49.351 } 00:48:49.351 } 00:48:49.351 ] 00:48:49.351 }, 00:48:49.351 { 00:48:49.351 "subsystem": "vmd", 00:48:49.351 "config": [] 00:48:49.351 }, 00:48:49.351 { 00:48:49.351 "subsystem": "accel", 00:48:49.351 "config": [ 00:48:49.351 { 00:48:49.351 "method": "accel_set_options", 00:48:49.351 "params": { 00:48:49.351 "small_cache_size": 128, 00:48:49.352 "large_cache_size": 16, 00:48:49.352 "task_count": 2048, 00:48:49.352 "sequence_count": 2048, 00:48:49.352 "buf_count": 2048 00:48:49.352 } 00:48:49.352 } 00:48:49.352 ] 00:48:49.352 }, 00:48:49.352 { 00:48:49.352 "subsystem": "bdev", 00:48:49.352 "config": [ 00:48:49.352 { 00:48:49.352 "method": "bdev_set_options", 00:48:49.352 "params": { 00:48:49.352 "bdev_io_pool_size": 65535, 00:48:49.352 "bdev_io_cache_size": 256, 00:48:49.352 "bdev_auto_examine": true, 00:48:49.352 "iobuf_small_cache_size": 128, 00:48:49.352 "iobuf_large_cache_size": 16 00:48:49.352 } 00:48:49.352 }, 00:48:49.352 { 00:48:49.352 "method": "bdev_raid_set_options", 00:48:49.352 "params": { 00:48:49.352 "process_window_size_kb": 1024, 00:48:49.352 "process_max_bandwidth_mb_sec": 0 00:48:49.352 } 00:48:49.352 }, 00:48:49.352 { 00:48:49.352 "method": "bdev_iscsi_set_options", 00:48:49.352 "params": { 00:48:49.352 "timeout_sec": 30 00:48:49.352 } 00:48:49.352 }, 00:48:49.352 { 00:48:49.352 "method": "bdev_nvme_set_options", 00:48:49.352 "params": { 00:48:49.352 "action_on_timeout": "none", 00:48:49.352 "timeout_us": 0, 00:48:49.352 "timeout_admin_us": 0, 00:48:49.352 "keep_alive_timeout_ms": 10000, 00:48:49.352 "arbitration_burst": 0, 00:48:49.352 "low_priority_weight": 0, 00:48:49.352 "medium_priority_weight": 0, 00:48:49.352 "high_priority_weight": 0, 00:48:49.352 "nvme_adminq_poll_period_us": 10000, 00:48:49.352 "nvme_ioq_poll_period_us": 0, 00:48:49.352 "io_queue_requests": 512, 00:48:49.352 "delay_cmd_submit": true, 00:48:49.352 "transport_retry_count": 4, 00:48:49.352 "bdev_retry_count": 3, 00:48:49.352 "transport_ack_timeout": 0, 00:48:49.352 "ctrlr_loss_timeout_sec": 0, 00:48:49.352 "reconnect_delay_sec": 0, 00:48:49.352 "fast_io_fail_timeout_sec": 0, 00:48:49.352 "disable_auto_failback": false, 00:48:49.352 "generate_uuids": false, 00:48:49.352 "transport_tos": 0, 00:48:49.352 "nvme_error_stat": false, 00:48:49.352 "rdma_srq_size": 0, 00:48:49.352 "io_path_stat": false, 00:48:49.352 "allow_accel_sequence": false, 00:48:49.352 "rdma_max_cq_size": 0, 00:48:49.352 "rdma_cm_event_timeout_ms": 0, 00:48:49.352 "dhchap_digests": [ 00:48:49.352 "sha256", 00:48:49.352 "sha384", 00:48:49.352 "sha512" 00:48:49.352 ], 00:48:49.352 "dhchap_dhgroups": [ 00:48:49.352 "null", 00:48:49.352 "ffdhe2048", 00:48:49.352 "ffdhe3072", 00:48:49.352 "ffdhe4096", 00:48:49.352 "ffdhe6144", 00:48:49.352 "ffdhe8192" 00:48:49.352 ] 00:48:49.352 } 00:48:49.352 }, 00:48:49.352 { 00:48:49.352 "method": "bdev_nvme_attach_controller", 00:48:49.352 "params": { 00:48:49.352 "name": "nvme0", 00:48:49.352 "trtype": "TCP", 00:48:49.352 "adrfam": "IPv4", 00:48:49.352 "traddr": "10.0.0.3", 00:48:49.352 "trsvcid": "4420", 00:48:49.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:49.352 "prchk_reftag": false, 00:48:49.352 
"prchk_guard": false, 00:48:49.352 "ctrlr_loss_timeout_sec": 0, 00:48:49.352 "reconnect_delay_sec": 0, 00:48:49.352 "fast_io_fail_timeout_sec": 0, 00:48:49.352 "psk": "key0", 00:48:49.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:49.352 "hdgst": false, 00:48:49.352 "ddgst": false, 00:48:49.352 "multipath": "multipath" 00:48:49.352 } 00:48:49.352 }, 00:48:49.352 { 00:48:49.352 "method": "bdev_nvme_set_hotplug", 00:48:49.352 "params": { 00:48:49.352 "period_us": 100000, 00:48:49.352 "enable": false 00:48:49.352 } 00:48:49.352 }, 00:48:49.352 { 00:48:49.352 "method": "bdev_enable_histogram", 00:48:49.352 "params": { 00:48:49.352 "name": "nvme0n1", 00:48:49.352 "enable": true 00:48:49.352 } 00:48:49.352 }, 00:48:49.352 { 00:48:49.352 "method": "bdev_wait_for_examine" 00:48:49.352 } 00:48:49.352 ] 00:48:49.352 }, 00:48:49.352 { 00:48:49.352 "subsystem": "nbd", 00:48:49.352 "config": [] 00:48:49.352 } 00:48:49.352 ] 00:48:49.352 }' 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72401 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72401 ']' 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72401 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72401 00:48:49.352 killing process with pid 72401 00:48:49.352 Received shutdown signal, test time was about 1.000000 seconds 00:48:49.352 00:48:49.352 Latency(us) 00:48:49.352 [2024-12-09T10:00:56.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:49.352 [2024-12-09T10:00:56.854Z] =================================================================================================================== 00:48:49.352 [2024-12-09T10:00:56.854Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72401' 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72401 00:48:49.352 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72401 00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72376 00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72376 ']' 00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72376 00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72376 00:48:49.611 killing process with pid 72376 00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72376' 00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72376 00:48:49.611 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72376 00:48:49.871 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:48:49.871 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:49.871 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:49.871 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:48:49.871 "subsystems": [ 00:48:49.871 { 00:48:49.871 "subsystem": "keyring", 00:48:49.871 "config": [ 00:48:49.871 { 00:48:49.871 "method": "keyring_file_add_key", 00:48:49.871 "params": { 00:48:49.871 "name": "key0", 00:48:49.871 "path": "/tmp/tmp.pnDhFlvg6S" 00:48:49.871 } 00:48:49.871 } 00:48:49.871 ] 00:48:49.871 }, 00:48:49.871 { 00:48:49.871 "subsystem": "iobuf", 00:48:49.871 "config": [ 00:48:49.871 { 00:48:49.871 "method": "iobuf_set_options", 00:48:49.871 "params": { 00:48:49.871 "small_pool_count": 8192, 00:48:49.871 "large_pool_count": 1024, 00:48:49.871 "small_bufsize": 8192, 00:48:49.871 "large_bufsize": 135168, 00:48:49.871 "enable_numa": false 00:48:49.871 } 00:48:49.871 } 00:48:49.871 ] 00:48:49.871 }, 00:48:49.871 { 00:48:49.871 "subsystem": "sock", 00:48:49.871 "config": [ 00:48:49.871 { 00:48:49.871 "method": "sock_set_default_impl", 00:48:49.871 "params": { 00:48:49.871 "impl_name": "uring" 00:48:49.871 } 00:48:49.871 }, 00:48:49.871 { 00:48:49.871 "method": "sock_impl_set_options", 00:48:49.871 "params": { 00:48:49.871 "impl_name": "ssl", 00:48:49.871 "recv_buf_size": 4096, 00:48:49.871 "send_buf_size": 4096, 00:48:49.871 "enable_recv_pipe": true, 00:48:49.871 "enable_quickack": false, 00:48:49.871 "enable_placement_id": 0, 00:48:49.871 "enable_zerocopy_send_server": true, 00:48:49.871 "enable_zerocopy_send_client": false, 00:48:49.871 "zerocopy_threshold": 0, 00:48:49.871 "tls_version": 0, 00:48:49.871 "enable_ktls": false 00:48:49.871 } 00:48:49.871 }, 00:48:49.871 { 00:48:49.871 "method": "sock_impl_set_options", 00:48:49.871 "params": { 00:48:49.871 "impl_name": "posix", 00:48:49.871 "recv_buf_size": 2097152, 00:48:49.871 "send_buf_size": 2097152, 00:48:49.871 "enable_recv_pipe": true, 00:48:49.871 "enable_quickack": false, 00:48:49.871 "enable_placement_id": 0, 00:48:49.871 "enable_zerocopy_send_server": true, 00:48:49.871 "enable_zerocopy_send_client": false, 00:48:49.871 "zerocopy_threshold": 0, 00:48:49.871 "tls_version": 0, 00:48:49.871 "enable_ktls": false 00:48:49.871 } 00:48:49.871 }, 00:48:49.871 { 00:48:49.871 "method": "sock_impl_set_options", 00:48:49.871 "params": { 00:48:49.871 "impl_name": "uring", 00:48:49.871 "recv_buf_size": 2097152, 00:48:49.871 "send_buf_size": 2097152, 00:48:49.871 "enable_recv_pipe": true, 00:48:49.871 "enable_quickack": false, 00:48:49.871 "enable_placement_id": 0, 00:48:49.871 "enable_zerocopy_send_server": false, 00:48:49.871 "enable_zerocopy_send_client": false, 00:48:49.871 "zerocopy_threshold": 0, 00:48:49.871 "tls_version": 0, 00:48:49.871 "enable_ktls": false 00:48:49.871 } 00:48:49.871 } 00:48:49.871 ] 00:48:49.871 }, 00:48:49.871 { 
00:48:49.871 "subsystem": "vmd", 00:48:49.871 "config": [] 00:48:49.871 }, 00:48:49.871 { 00:48:49.871 "subsystem": "accel", 00:48:49.871 "config": [ 00:48:49.871 { 00:48:49.871 "method": "accel_set_options", 00:48:49.871 "params": { 00:48:49.871 "small_cache_size": 128, 00:48:49.871 "large_cache_size": 16, 00:48:49.871 "task_count": 2048, 00:48:49.871 "sequence_count": 2048, 00:48:49.871 "buf_count": 2048 00:48:49.871 } 00:48:49.871 } 00:48:49.871 ] 00:48:49.871 }, 00:48:49.871 { 00:48:49.871 "subsystem": "bdev", 00:48:49.871 "config": [ 00:48:49.871 { 00:48:49.871 "method": "bdev_set_options", 00:48:49.871 "params": { 00:48:49.871 "bdev_io_pool_size": 65535, 00:48:49.871 "bdev_io_cache_size": 256, 00:48:49.871 "bdev_auto_examine": true, 00:48:49.871 "iobuf_small_cache_size": 128, 00:48:49.871 "iobuf_large_cache_size": 16 00:48:49.871 } 00:48:49.871 }, 00:48:49.871 { 00:48:49.871 "method": "bdev_raid_set_options", 00:48:49.871 "params": { 00:48:49.871 "process_window_size_kb": 1024, 00:48:49.871 "process_max_bandwidth_mb_sec": 0 00:48:49.871 } 00:48:49.871 }, 00:48:49.871 { 00:48:49.871 "method": "bdev_iscsi_set_options", 00:48:49.871 "params": { 00:48:49.871 "timeout_sec": 30 00:48:49.871 } 00:48:49.871 }, 00:48:49.871 { 00:48:49.871 "method": "bdev_nvme_set_options", 00:48:49.871 "params": { 00:48:49.871 "action_on_timeout": "none", 00:48:49.871 "timeout_us": 0, 00:48:49.871 "timeout_admin_us": 0, 00:48:49.871 "keep_alive_timeout_ms": 10000, 00:48:49.871 "arbitration_burst": 0, 00:48:49.871 "low_priority_weight": 0, 00:48:49.871 "medium_priority_weight": 0, 00:48:49.871 "high_priority_weight": 0, 00:48:49.871 "nvme_adminq_poll_period_us": 10000, 00:48:49.871 "nvme_ioq_poll_period_us": 0, 00:48:49.871 "io_queue_requests": 0, 00:48:49.871 "delay_cmd_submit": true, 00:48:49.871 "transport_retry_count": 4, 00:48:49.871 "bdev_retry_count": 3, 00:48:49.871 "transport_ack_timeout": 0, 00:48:49.871 "ctrlr_loss_timeout_sec": 0, 00:48:49.871 "reconnect_delay_sec": 0, 00:48:49.871 "fast_io_fail_timeout_sec": 0, 00:48:49.871 "disable_auto_failback": false, 00:48:49.871 "generate_uuids": false, 00:48:49.871 "transport_tos": 0, 00:48:49.871 "nvme_error_stat": false, 00:48:49.871 "rdma_srq_size": 0, 00:48:49.871 "io_path_stat": false, 00:48:49.871 "allow_accel_sequence": false, 00:48:49.871 "rdma_max_cq_size": 0, 00:48:49.871 "rdma_cm_event_timeout_ms": 0, 00:48:49.871 "dhchap_digests": [ 00:48:49.871 "sha256", 00:48:49.871 "sha384", 00:48:49.871 "sha512" 00:48:49.871 ], 00:48:49.871 "dhchap_dhgroups": [ 00:48:49.871 "null", 00:48:49.871 "ffdhe2048", 00:48:49.871 "ffdhe3072", 00:48:49.872 "ffdhe4096", 00:48:49.872 "ffdhe6144", 00:48:49.872 "ffdhe8192" 00:48:49.872 ] 00:48:49.872 } 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "method": "bdev_nvme_set_hotplug", 00:48:49.872 "params": { 00:48:49.872 "period_us": 100000, 00:48:49.872 "enable": false 00:48:49.872 } 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "method": "bdev_malloc_create", 00:48:49.872 "params": { 00:48:49.872 "name": "malloc0", 00:48:49.872 "num_blocks": 8192, 00:48:49.872 "block_size": 4096, 00:48:49.872 "physical_block_size": 4096, 00:48:49.872 "uuid": "228b8029-0c2a-471d-9d67-7b6746977e4a", 00:48:49.872 "optimal_io_boundary": 0, 00:48:49.872 "md_size": 0, 00:48:49.872 "dif_type": 0, 00:48:49.872 "dif_is_head_of_md": false, 00:48:49.872 "dif_pi_format": 0 00:48:49.872 } 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "method": "bdev_wait_for_examine" 00:48:49.872 } 00:48:49.872 ] 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "subsystem": 
"nbd", 00:48:49.872 "config": [] 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "subsystem": "scheduler", 00:48:49.872 "config": [ 00:48:49.872 { 00:48:49.872 "method": "framework_set_scheduler", 00:48:49.872 "params": { 00:48:49.872 "name": "static" 00:48:49.872 } 00:48:49.872 } 00:48:49.872 ] 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "subsystem": "nvmf", 00:48:49.872 "config": [ 00:48:49.872 { 00:48:49.872 "method": "nvmf_set_config", 00:48:49.872 "params": { 00:48:49.872 "discovery_filter": "match_any", 00:48:49.872 "admin_cmd_passthru": { 00:48:49.872 "identify_ctrlr": false 00:48:49.872 }, 00:48:49.872 "dhchap_digests": [ 00:48:49.872 "sha256", 00:48:49.872 "sha384", 00:48:49.872 "sha512" 00:48:49.872 ], 00:48:49.872 "dhchap_dhgroups": [ 00:48:49.872 "null", 00:48:49.872 "ffdhe2048", 00:48:49.872 "ffdhe3072", 00:48:49.872 "ffdhe4096", 00:48:49.872 "ffdhe6144", 00:48:49.872 "ffdhe8192" 00:48:49.872 ] 00:48:49.872 } 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "method": "nvmf_set_max_subsystems", 00:48:49.872 "params": { 00:48:49.872 "max_subsystems": 1024 00:48:49.872 } 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "method": "nvmf_set_crdt", 00:48:49.872 "params": { 00:48:49.872 "crdt1": 0, 00:48:49.872 "crdt2": 0, 00:48:49.872 "crdt3": 0 00:48:49.872 } 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "method": "nvmf_create_transport", 00:48:49.872 "params": { 00:48:49.872 "trtype": "TCP", 00:48:49.872 "max_queue_depth": 128, 00:48:49.872 "max_io_qpairs_per_ctrlr": 127, 00:48:49.872 "in_capsule_data_size": 4096, 00:48:49.872 "max_io_size": 131072, 00:48:49.872 "io_unit_size": 131072, 00:48:49.872 "max_aq_depth": 128, 00:48:49.872 "num_shared_buffers": 511, 00:48:49.872 "buf_cache_size": 4294967295, 00:48:49.872 "dif_insert_or_strip": false, 00:48:49.872 "zcopy": false, 00:48:49.872 "c2h_success": false, 00:48:49.872 "sock_priority": 0, 00:48:49.872 "abort_timeout_sec": 1, 00:48:49.872 "ack_timeout": 0, 00:48:49.872 "data_wr_pool_size": 0 00:48:49.872 } 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "method": "nvmf_create_subsystem", 00:48:49.872 "params": { 00:48:49.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:49.872 "allow_any_host": false, 00:48:49.872 "serial_number": "00000000000000000000", 00:48:49.872 "model_number": "SPDK bdev Controller", 00:48:49.872 "max_namespaces": 32, 00:48:49.872 "min_cntlid": 1, 00:48:49.872 "max_cntlid": 65519, 00:48:49.872 "ana_reporting": false 00:48:49.872 } 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "method": "nvmf_subsystem_add_host", 00:48:49.872 "params": { 00:48:49.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:49.872 "host": "nqn.2016-06.io.spdk:host1", 00:48:49.872 "psk": "key0" 00:48:49.872 } 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "method": "nvmf_subsystem_add_ns", 00:48:49.872 "params": { 00:48:49.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:49.872 "namespace": { 00:48:49.872 "nsid": 1, 00:48:49.872 "bdev_name": "malloc0", 00:48:49.872 "nguid": "228B80290C2A471D9D677B6746977E4A", 00:48:49.872 "uuid": "228b8029-0c2a-471d-9d67-7b6746977e4a", 00:48:49.872 "no_auto_visible": false 00:48:49.872 } 00:48:49.872 } 00:48:49.872 }, 00:48:49.872 { 00:48:49.872 "method": "nvmf_subsystem_add_listener", 00:48:49.872 "params": { 00:48:49.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:49.872 "listen_address": { 00:48:49.872 "trtype": "TCP", 00:48:49.872 "adrfam": "IPv4", 00:48:49.872 "traddr": "10.0.0.3", 00:48:49.872 "trsvcid": "4420" 00:48:49.872 }, 00:48:49.872 "secure_channel": false, 00:48:49.872 "sock_impl": "ssl" 00:48:49.872 } 00:48:49.872 } 
00:48:49.872 ] 00:48:49.872 } 00:48:49.872 ] 00:48:49.872 }' 00:48:49.872 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:49.872 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:48:49.872 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72454 00:48:49.872 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72454 00:48:49.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:49.872 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72454 ']' 00:48:49.872 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:49.872 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:49.872 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:49.872 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:49.872 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:49.872 [2024-12-09 10:00:57.237227] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:49.872 [2024-12-09 10:00:57.237592] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:50.131 [2024-12-09 10:00:57.385019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:50.131 [2024-12-09 10:00:57.417280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:50.131 [2024-12-09 10:00:57.417519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:50.131 [2024-12-09 10:00:57.417657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:50.131 [2024-12-09 10:00:57.417816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:50.131 [2024-12-09 10:00:57.418027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
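The "-c /dev/fd/62" argument above is the target configuration captured earlier being fed back to a fresh nvmf_tgt through bash process substitution; the echo visible at target/tls.sh@273 is the writing end of that pipe. A rough sketch of the idiom (the variable name and the process-substitution form are assumptions, the command and flags are as logged):

# Sketch: restart the target from the JSON captured with save_config
tgtcfg=$(rpc.py save_config)                                     # dump of the live target configuration
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
nvmfpid=$!                                                       # 72454 in this run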
00:48:50.131 [2024-12-09 10:00:57.418401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:50.131 [2024-12-09 10:00:57.561892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:50.131 [2024-12-09 10:00:57.620789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:50.389 [2024-12-09 10:00:57.652738] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:50.389 [2024-12-09 10:00:57.653121] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:50.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72486 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72486 /var/tmp/bdevperf.sock 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72486 ']' 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:50.963 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:48:50.963 "subsystems": [ 00:48:50.963 { 00:48:50.963 "subsystem": "keyring", 00:48:50.963 "config": [ 00:48:50.963 { 00:48:50.963 "method": "keyring_file_add_key", 00:48:50.963 "params": { 00:48:50.963 "name": "key0", 00:48:50.963 "path": "/tmp/tmp.pnDhFlvg6S" 00:48:50.963 } 00:48:50.963 } 00:48:50.963 ] 00:48:50.963 }, 00:48:50.963 { 00:48:50.963 "subsystem": "iobuf", 00:48:50.963 "config": [ 00:48:50.963 { 00:48:50.963 "method": "iobuf_set_options", 00:48:50.963 "params": { 00:48:50.963 "small_pool_count": 8192, 00:48:50.963 "large_pool_count": 1024, 00:48:50.963 "small_bufsize": 8192, 00:48:50.963 "large_bufsize": 135168, 00:48:50.963 "enable_numa": false 00:48:50.963 } 00:48:50.963 } 00:48:50.963 ] 00:48:50.963 }, 00:48:50.963 { 00:48:50.963 "subsystem": "sock", 00:48:50.963 "config": [ 00:48:50.963 { 00:48:50.963 "method": "sock_set_default_impl", 00:48:50.963 "params": { 00:48:50.963 "impl_name": "uring" 00:48:50.963 } 00:48:50.963 }, 00:48:50.963 { 00:48:50.963 "method": "sock_impl_set_options", 00:48:50.963 "params": { 00:48:50.963 "impl_name": "ssl", 00:48:50.963 "recv_buf_size": 4096, 00:48:50.963 "send_buf_size": 4096, 00:48:50.963 "enable_recv_pipe": true, 00:48:50.963 "enable_quickack": false, 00:48:50.963 "enable_placement_id": 0, 00:48:50.963 "enable_zerocopy_send_server": true, 00:48:50.963 "enable_zerocopy_send_client": false, 00:48:50.963 "zerocopy_threshold": 0, 00:48:50.963 "tls_version": 0, 00:48:50.963 "enable_ktls": false 00:48:50.963 } 00:48:50.963 }, 00:48:50.963 { 00:48:50.963 "method": "sock_impl_set_options", 00:48:50.963 "params": { 00:48:50.963 "impl_name": "posix", 00:48:50.963 "recv_buf_size": 2097152, 00:48:50.963 "send_buf_size": 2097152, 00:48:50.963 "enable_recv_pipe": true, 00:48:50.963 "enable_quickack": false, 00:48:50.963 "enable_placement_id": 0, 00:48:50.963 "enable_zerocopy_send_server": true, 00:48:50.963 "enable_zerocopy_send_client": false, 00:48:50.963 "zerocopy_threshold": 0, 00:48:50.963 "tls_version": 0, 00:48:50.963 "enable_ktls": false 00:48:50.963 } 00:48:50.963 }, 00:48:50.963 { 00:48:50.963 "method": "sock_impl_set_options", 00:48:50.963 "params": { 00:48:50.963 "impl_name": "uring", 00:48:50.963 "recv_buf_size": 2097152, 00:48:50.963 "send_buf_size": 2097152, 00:48:50.963 "enable_recv_pipe": true, 00:48:50.963 "enable_quickack": false, 00:48:50.963 "enable_placement_id": 0, 00:48:50.963 "enable_zerocopy_send_server": false, 00:48:50.963 "enable_zerocopy_send_client": false, 00:48:50.963 "zerocopy_threshold": 0, 00:48:50.963 "tls_version": 0, 00:48:50.963 "enable_ktls": false 00:48:50.963 } 00:48:50.963 } 00:48:50.963 ] 00:48:50.963 }, 00:48:50.963 { 00:48:50.963 "subsystem": "vmd", 00:48:50.963 "config": [] 00:48:50.963 }, 00:48:50.963 { 00:48:50.963 "subsystem": "accel", 00:48:50.963 "config": [ 00:48:50.963 { 00:48:50.963 "method": "accel_set_options", 00:48:50.963 "params": { 00:48:50.963 "small_cache_size": 128, 00:48:50.963 "large_cache_size": 16, 00:48:50.963 "task_count": 2048, 00:48:50.963 "sequence_count": 2048, 00:48:50.963 "buf_count": 2048 00:48:50.963 } 00:48:50.963 } 00:48:50.963 ] 00:48:50.963 }, 
00:48:50.963 { 00:48:50.963 "subsystem": "bdev", 00:48:50.963 "config": [ 00:48:50.963 { 00:48:50.963 "method": "bdev_set_options", 00:48:50.963 "params": { 00:48:50.963 "bdev_io_pool_size": 65535, 00:48:50.963 "bdev_io_cache_size": 256, 00:48:50.963 "bdev_auto_examine": true, 00:48:50.963 "iobuf_small_cache_size": 128, 00:48:50.963 "iobuf_large_cache_size": 16 00:48:50.963 } 00:48:50.963 }, 00:48:50.963 { 00:48:50.963 "method": "bdev_raid_set_options", 00:48:50.963 "params": { 00:48:50.963 "process_window_size_kb": 1024, 00:48:50.963 "process_max_bandwidth_mb_sec": 0 00:48:50.963 } 00:48:50.963 }, 00:48:50.963 { 00:48:50.963 "method": "bdev_iscsi_set_options", 00:48:50.963 "params": { 00:48:50.963 "timeout_sec": 30 00:48:50.963 } 00:48:50.963 }, 00:48:50.963 { 00:48:50.964 "method": "bdev_nvme_set_options", 00:48:50.964 "params": { 00:48:50.964 "action_on_timeout": "none", 00:48:50.964 "timeout_us": 0, 00:48:50.964 "timeout_admin_us": 0, 00:48:50.964 "keep_alive_timeout_ms": 10000, 00:48:50.964 "arbitration_burst": 0, 00:48:50.964 "low_priority_weight": 0, 00:48:50.964 "medium_priority_weight": 0, 00:48:50.964 "high_priority_weight": 0, 00:48:50.964 "nvme_adminq_poll_period_us": 10000, 00:48:50.964 "nvme_ioq_poll_period_us": 0, 00:48:50.964 "io_queue_requests": 512, 00:48:50.964 "delay_cmd_submit": true, 00:48:50.964 "transport_retry_count": 4, 00:48:50.964 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:50.964 "bdev_retry_count": 3, 00:48:50.964 "transport_ack_timeout": 0, 00:48:50.964 "ctrlr_loss_timeout_sec": 0, 00:48:50.964 "reconnect_delay_sec": 0, 00:48:50.964 "fast_io_fail_timeout_sec": 0, 00:48:50.964 "disable_auto_failback": false, 00:48:50.964 "generate_uuids": false, 00:48:50.964 "transport_tos": 0, 00:48:50.964 "nvme_error_stat": false, 00:48:50.964 "rdma_srq_size": 0, 00:48:50.964 "io_path_stat": false, 00:48:50.964 "allow_accel_sequence": false, 00:48:50.964 "rdma_max_cq_size": 0, 00:48:50.964 "rdma_cm_event_timeout_ms": 0, 00:48:50.964 "dhchap_digests": [ 00:48:50.964 "sha256", 00:48:50.964 "sha384", 00:48:50.964 "sha512" 00:48:50.964 ], 00:48:50.964 "dhchap_dhgroups": [ 00:48:50.964 "null", 00:48:50.964 "ffdhe2048", 00:48:50.964 "ffdhe3072", 00:48:50.964 "ffdhe4096", 00:48:50.964 "ffdhe6144", 00:48:50.964 "ffdhe8192" 00:48:50.964 ] 00:48:50.964 } 00:48:50.964 }, 00:48:50.964 { 00:48:50.964 "method": "bdev_nvme_attach_controller", 00:48:50.964 "params": { 00:48:50.964 "name": "nvme0", 00:48:50.964 "trtype": "TCP", 00:48:50.964 "adrfam": "IPv4", 00:48:50.964 "traddr": "10.0.0.3", 00:48:50.964 "trsvcid": "4420", 00:48:50.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:50.964 "prchk_reftag": false, 00:48:50.964 "prchk_guard": false, 00:48:50.964 "ctrlr_loss_timeout_sec": 0, 00:48:50.964 "reconnect_delay_sec": 0, 00:48:50.964 "fast_io_fail_timeout_sec": 0, 00:48:50.964 "psk": "key0", 00:48:50.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:50.964 "hdgst": false, 00:48:50.964 "ddgst": false, 00:48:50.964 "multipath": "multipath" 00:48:50.964 } 00:48:50.964 }, 00:48:50.964 { 00:48:50.964 "method": "bdev_nvme_set_hotplug", 00:48:50.964 "params": { 00:48:50.964 "period_us": 100000, 00:48:50.964 "enable": false 00:48:50.964 } 00:48:50.964 }, 00:48:50.964 { 00:48:50.964 "method": "bdev_enable_histogram", 00:48:50.964 "params": { 00:48:50.964 "name": "nvme0n1", 00:48:50.964 "enable": true 00:48:50.964 } 00:48:50.964 }, 00:48:50.964 { 00:48:50.964 "method": "bdev_wait_for_examine" 00:48:50.964 } 00:48:50.964 ] 00:48:50.964 }, 
00:48:50.964 { 00:48:50.964 "subsystem": "nbd", 00:48:50.964 "config": [] 00:48:50.964 } 00:48:50.964 ] 00:48:50.964 }' 00:48:50.964 [2024-12-09 10:00:58.333520] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:50.964 [2024-12-09 10:00:58.333780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72486 ] 00:48:51.222 [2024-12-09 10:00:58.482775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:51.222 [2024-12-09 10:00:58.517208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:51.222 [2024-12-09 10:00:58.629173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:51.222 [2024-12-09 10:00:58.662508] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:52.158 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:52.158 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:48:52.158 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:48:52.158 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:48:52.417 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:52.417 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:48:52.417 Running I/O for 1 seconds... 
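The second bdevperf pass uses the same pattern from the other direction: bdevperf is started idle with -z on its RPC socket, the keyring entry and the TLS-attached nvme0 controller come from the JSON configuration above (passed as /dev/fd/63) rather than from individual RPCs, and the run is then triggered over the socket. Roughly, with the process-substitution form assumed and the flags as logged:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &         # config-driven relaunch, waits for RPC
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests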
00:48:53.352 3840.00 IOPS, 15.00 MiB/s 00:48:53.352 Latency(us) 00:48:53.352 [2024-12-09T10:01:00.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:53.352 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:48:53.352 Verification LBA range: start 0x0 length 0x2000 00:48:53.352 nvme0n1 : 1.03 3863.51 15.09 0.00 0.00 32764.19 10783.65 24188.74 00:48:53.352 [2024-12-09T10:01:00.854Z] =================================================================================================================== 00:48:53.352 [2024-12-09T10:01:00.854Z] Total : 3863.51 15.09 0.00 0.00 32764.19 10783.65 24188.74 00:48:53.352 { 00:48:53.352 "results": [ 00:48:53.352 { 00:48:53.352 "job": "nvme0n1", 00:48:53.352 "core_mask": "0x2", 00:48:53.352 "workload": "verify", 00:48:53.352 "status": "finished", 00:48:53.352 "verify_range": { 00:48:53.352 "start": 0, 00:48:53.352 "length": 8192 00:48:53.352 }, 00:48:53.352 "queue_depth": 128, 00:48:53.352 "io_size": 4096, 00:48:53.352 "runtime": 1.027045, 00:48:53.352 "iops": 3863.5113359200423, 00:48:53.352 "mibps": 15.091841155937665, 00:48:53.352 "io_failed": 0, 00:48:53.352 "io_timeout": 0, 00:48:53.352 "avg_latency_us": 32764.193782991206, 00:48:53.352 "min_latency_us": 10783.65090909091, 00:48:53.352 "max_latency_us": 24188.741818181818 00:48:53.352 } 00:48:53.352 ], 00:48:53.352 "core_count": 1 00:48:53.352 } 00:48:53.352 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:48:53.352 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:48:53.611 nvmf_trace.0 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72486 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72486 ']' 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72486 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72486 00:48:53.611 killing 
process with pid 72486 00:48:53.611 Received shutdown signal, test time was about 1.000000 seconds 00:48:53.611 00:48:53.611 Latency(us) 00:48:53.611 [2024-12-09T10:01:01.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:53.611 [2024-12-09T10:01:01.113Z] =================================================================================================================== 00:48:53.611 [2024-12-09T10:01:01.113Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72486' 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72486 00:48:53.611 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72486 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:53.870 rmmod nvme_tcp 00:48:53.870 rmmod nvme_fabrics 00:48:53.870 rmmod nvme_keyring 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72454 ']' 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72454 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72454 ']' 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72454 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72454 00:48:53.870 killing process with pid 72454 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72454' 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72454 00:48:53.870 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 72454 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:48:54.129 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.SUzUbsDggz /tmp/tmp.8BdsT2G27Q /tmp/tmp.pnDhFlvg6S 00:48:54.388 ************************************ 00:48:54.388 END TEST nvmf_tls 00:48:54.388 ************************************ 00:48:54.388 00:48:54.388 real 1m22.447s 00:48:54.388 user 2m16.365s 00:48:54.388 sys 0m25.454s 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:48:54.388 ************************************ 00:48:54.388 START TEST nvmf_fips 00:48:54.388 ************************************ 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:48:54.388 * Looking for test storage... 00:48:54.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:48:54.388 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:48:54.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:54.647 --rc genhtml_branch_coverage=1 00:48:54.647 --rc genhtml_function_coverage=1 00:48:54.647 --rc genhtml_legend=1 00:48:54.647 --rc geninfo_all_blocks=1 00:48:54.647 --rc geninfo_unexecuted_blocks=1 00:48:54.647 00:48:54.647 ' 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:48:54.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:54.647 --rc genhtml_branch_coverage=1 00:48:54.647 --rc genhtml_function_coverage=1 00:48:54.647 --rc genhtml_legend=1 00:48:54.647 --rc geninfo_all_blocks=1 00:48:54.647 --rc geninfo_unexecuted_blocks=1 00:48:54.647 00:48:54.647 ' 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:48:54.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:54.647 --rc genhtml_branch_coverage=1 00:48:54.647 --rc genhtml_function_coverage=1 00:48:54.647 --rc genhtml_legend=1 00:48:54.647 --rc geninfo_all_blocks=1 00:48:54.647 --rc geninfo_unexecuted_blocks=1 00:48:54.647 00:48:54.647 ' 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:48:54.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:54.647 --rc genhtml_branch_coverage=1 00:48:54.647 --rc genhtml_function_coverage=1 00:48:54.647 --rc genhtml_legend=1 00:48:54.647 --rc geninfo_all_blocks=1 00:48:54.647 --rc geninfo_unexecuted_blocks=1 00:48:54.647 00:48:54.647 ' 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:54.647 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
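The xtrace above steps through SPDK's generic dotted-version comparison (cmp_versions from scripts/common.sh), here gating the lcov options and, a little further below, the OpenSSL >= 3.0.0 floor for the FIPS test. The following is a minimal standalone sketch of the same field-by-field idea, assuming plain numeric components; it is an illustration, not the exact helper from scripts/common.sh.

version_lt() {
    # Return 0 if dotted version $1 is strictly older than $2, comparing field by field.
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"                # the check traced above
version_lt 3.1.1 3.0.0 || echo "OpenSSL 3.1.1 meets the 3.0.0 floor" # the check traced below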
00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:54.648 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:48:54.648 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:48:54.648 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:48:54.649 Error setting digest 00:48:54.649 40E2A39E567F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:48:54.649 40E2A39E567F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:54.649 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:48:54.907 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:54.907 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:48:54.907 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:54.907 
10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:54.907 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:54.907 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:48:54.908 Cannot find device "nvmf_init_br" 00:48:54.908 10:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:48:54.908 Cannot find device "nvmf_init_br2" 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:48:54.908 Cannot find device "nvmf_tgt_br" 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:48:54.908 Cannot find device "nvmf_tgt_br2" 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:48:54.908 Cannot find device "nvmf_init_br" 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:48:54.908 Cannot find device "nvmf_init_br2" 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:48:54.908 Cannot find device "nvmf_tgt_br" 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:48:54.908 Cannot find device "nvmf_tgt_br2" 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:48:54.908 Cannot find device "nvmf_br" 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:48:54.908 Cannot find device "nvmf_init_if" 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:48:54.908 Cannot find device "nvmf_init_if2" 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:54.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:54.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:54.908 10:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:54.908 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:48:55.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:55.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:48:55.167 00:48:55.167 --- 10.0.0.3 ping statistics --- 00:48:55.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:55.167 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:48:55.167 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:48:55.167 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:48:55.167 00:48:55.167 --- 10.0.0.4 ping statistics --- 00:48:55.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:55.167 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:55.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:55.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:48:55.167 00:48:55.167 --- 10.0.0.1 ping statistics --- 00:48:55.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:55.167 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:48:55.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:48:55.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:48:55.167 00:48:55.167 --- 10.0.0.2 ping statistics --- 00:48:55.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:55.167 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72818 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72818 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72818 ']' 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:55.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:55.167 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:48:55.167 [2024-12-09 10:01:02.636453] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
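The nvmf_veth_init sequence above stitches the initiator and target sides together with veth pairs, a bridge and a dedicated network namespace, then proves reachability with the four pings before the target is started. Below is a stripped-down, one-pair-per-side sketch of that topology; names and addresses follow the log, while the real helper in test/nvmf/common.sh creates two interfaces per side and also installs the iptables ACCEPT rules for port 4420.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # bridge joins the two host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ping -c 1 10.0.0.3                                           # host -> namespace, as verified above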
00:48:55.167 [2024-12-09 10:01:02.636565] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:55.426 [2024-12-09 10:01:02.795179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:55.426 [2024-12-09 10:01:02.832276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:55.426 [2024-12-09 10:01:02.832341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:55.426 [2024-12-09 10:01:02.832356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:55.426 [2024-12-09 10:01:02.832367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:55.426 [2024-12-09 10:01:02.832376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:55.426 [2024-12-09 10:01:02.832736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:55.426 [2024-12-09 10:01:02.865713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:55.426 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:55.426 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:48:55.426 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:55.426 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:55.426 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:48:55.685 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:55.685 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:48:55.685 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:48:55.685 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:48:55.685 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.WA0 00:48:55.685 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:48:55.685 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.WA0 00:48:55.685 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.WA0 00:48:55.685 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.WA0 00:48:55.685 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:55.944 [2024-12-09 10:01:03.260231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:55.944 [2024-12-09 10:01:03.276168] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:55.944 [2024-12-09 10:01:03.276359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:55.944 malloc0 00:48:55.944 10:01:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:48:55.944 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72847 00:48:55.944 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:48:55.944 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72847 /var/tmp/bdevperf.sock 00:48:55.944 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72847 ']' 00:48:55.944 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:55.944 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:55.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:55.944 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:55.944 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:55.944 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:48:55.944 [2024-12-09 10:01:03.421002] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:48:55.944 [2024-12-09 10:01:03.421097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72847 ] 00:48:56.203 [2024-12-09 10:01:03.575367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:56.203 [2024-12-09 10:01:03.614497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:56.203 [2024-12-09 10:01:03.647514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:48:57.137 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:57.137 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:48:57.137 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.WA0 00:48:57.396 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:48:57.654 [2024-12-09 10:01:04.974021] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:57.654 TLSTESTn1 00:48:57.654 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:48:57.937 Running I/O for 10 seconds... 
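Before the ten-second run above begins, bdevperf is pointed at the TLS-enabled listener entirely over rpc.py. The essential client-side sequence, lifted from the trace (the RPC and SOCK variables are only for readability; the PSK path is the temporary key file created earlier and the NQNs are the ones the test registered on the target), looks roughly like this:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
# Register the PSK file under the name "key0" in bdevperf's keyring ...
$RPC -s $SOCK keyring_file_add_key key0 /tmp/spdk-psk.WA0
# ... attach to the TLS listener on 10.0.0.3:4420 using that key ...
$RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# ... and trigger the verify workload defined on the bdevperf command line
# (-q 128 -o 4096 -w verify -t 10).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests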
00:48:59.806 3935.00 IOPS, 15.37 MiB/s [2024-12-09T10:01:08.242Z] 3981.00 IOPS, 15.55 MiB/s [2024-12-09T10:01:09.248Z] 4011.00 IOPS, 15.67 MiB/s [2024-12-09T10:01:10.624Z] 4016.00 IOPS, 15.69 MiB/s [2024-12-09T10:01:11.560Z] 4023.40 IOPS, 15.72 MiB/s [2024-12-09T10:01:12.495Z] 4030.17 IOPS, 15.74 MiB/s [2024-12-09T10:01:13.430Z] 4018.14 IOPS, 15.70 MiB/s [2024-12-09T10:01:14.366Z] 4022.38 IOPS, 15.71 MiB/s [2024-12-09T10:01:15.299Z] 4023.00 IOPS, 15.71 MiB/s [2024-12-09T10:01:15.299Z] 4027.30 IOPS, 15.73 MiB/s 00:49:07.797 Latency(us) 00:49:07.797 [2024-12-09T10:01:15.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:07.797 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:49:07.797 Verification LBA range: start 0x0 length 0x2000 00:49:07.797 TLSTESTn1 : 10.02 4032.86 15.75 0.00 0.00 31678.97 6374.87 30980.65 00:49:07.797 [2024-12-09T10:01:15.299Z] =================================================================================================================== 00:49:07.797 [2024-12-09T10:01:15.299Z] Total : 4032.86 15.75 0.00 0.00 31678.97 6374.87 30980.65 00:49:07.797 { 00:49:07.797 "results": [ 00:49:07.797 { 00:49:07.797 "job": "TLSTESTn1", 00:49:07.797 "core_mask": "0x4", 00:49:07.797 "workload": "verify", 00:49:07.797 "status": "finished", 00:49:07.797 "verify_range": { 00:49:07.797 "start": 0, 00:49:07.797 "length": 8192 00:49:07.797 }, 00:49:07.797 "queue_depth": 128, 00:49:07.797 "io_size": 4096, 00:49:07.797 "runtime": 10.017447, 00:49:07.797 "iops": 4032.863862419237, 00:49:07.797 "mibps": 15.753374462575145, 00:49:07.797 "io_failed": 0, 00:49:07.797 "io_timeout": 0, 00:49:07.797 "avg_latency_us": 31678.966463706347, 00:49:07.797 "min_latency_us": 6374.865454545455, 00:49:07.797 "max_latency_us": 30980.654545454545 00:49:07.797 } 00:49:07.797 ], 00:49:07.797 "core_count": 1 00:49:07.797 } 00:49:07.797 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:49:07.797 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:49:07.797 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:49:07.797 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:49:07.797 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:49:07.797 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:49:07.797 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:49:07.797 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:49:07.797 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:49:07.797 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:49:07.797 nvmf_trace.0 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72847 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72847 ']' 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72847 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72847 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:49:08.055 killing process with pid 72847 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72847' 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72847 00:49:08.055 Received shutdown signal, test time was about 10.000000 seconds 00:49:08.055 00:49:08.055 Latency(us) 00:49:08.055 [2024-12-09T10:01:15.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:08.055 [2024-12-09T10:01:15.557Z] =================================================================================================================== 00:49:08.055 [2024-12-09T10:01:15.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72847 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:08.055 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:08.314 rmmod nvme_tcp 00:49:08.314 rmmod nvme_fabrics 00:49:08.314 rmmod nvme_keyring 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72818 ']' 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72818 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72818 ']' 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72818 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72818 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:49:08.314 killing process with pid 72818 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72818' 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72818 00:49:08.314 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72818 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:08.573 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:08.573 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:08.573 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:08.573 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:08.573 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:08.573 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:08.573 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:08.573 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:49:08.832 10:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.WA0 00:49:08.832 00:49:08.832 real 0m14.331s 00:49:08.832 user 0m20.464s 00:49:08.832 sys 0m5.557s 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:08.832 ************************************ 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:49:08.832 END TEST nvmf_fips 00:49:08.832 ************************************ 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:49:08.832 ************************************ 00:49:08.832 START TEST nvmf_control_msg_list 00:49:08.832 ************************************ 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:49:08.832 * Looking for test storage... 00:49:08.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:08.832 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:08.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:08.833 --rc genhtml_branch_coverage=1 00:49:08.833 --rc genhtml_function_coverage=1 00:49:08.833 --rc genhtml_legend=1 00:49:08.833 --rc geninfo_all_blocks=1 00:49:08.833 --rc geninfo_unexecuted_blocks=1 00:49:08.833 00:49:08.833 ' 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:08.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:08.833 --rc genhtml_branch_coverage=1 00:49:08.833 --rc genhtml_function_coverage=1 00:49:08.833 --rc genhtml_legend=1 00:49:08.833 --rc geninfo_all_blocks=1 00:49:08.833 --rc geninfo_unexecuted_blocks=1 00:49:08.833 00:49:08.833 ' 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:08.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:08.833 --rc genhtml_branch_coverage=1 00:49:08.833 --rc genhtml_function_coverage=1 00:49:08.833 --rc genhtml_legend=1 00:49:08.833 --rc geninfo_all_blocks=1 00:49:08.833 --rc geninfo_unexecuted_blocks=1 00:49:08.833 00:49:08.833 ' 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:08.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:08.833 --rc genhtml_branch_coverage=1 00:49:08.833 --rc genhtml_function_coverage=1 00:49:08.833 --rc genhtml_legend=1 00:49:08.833 --rc geninfo_all_blocks=1 00:49:08.833 --rc 
geninfo_unexecuted_blocks=1 00:49:08.833 00:49:08.833 ' 00:49:08.833 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:09.092 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:09.093 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:09.093 Cannot find device "nvmf_init_br" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:09.093 Cannot find device "nvmf_init_br2" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:49:09.093 Cannot find device "nvmf_tgt_br" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:49:09.093 Cannot find device "nvmf_tgt_br2" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:09.093 Cannot find device "nvmf_init_br" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:09.093 Cannot find device "nvmf_init_br2" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:09.093 Cannot find device "nvmf_tgt_br" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:09.093 Cannot find device "nvmf_tgt_br2" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:09.093 Cannot find device "nvmf_br" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:09.093 Cannot find 
device "nvmf_init_if" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:09.093 Cannot find device "nvmf_init_if2" 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:09.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:09.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:09.093 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:09.353 10:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:09.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:09.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:49:09.353 00:49:09.353 --- 10.0.0.3 ping statistics --- 00:49:09.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:09.353 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:09.353 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:49:09.353 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:49:09.353 00:49:09.353 --- 10.0.0.4 ping statistics --- 00:49:09.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:09.353 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:09.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:49:09.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:49:09.353 00:49:09.353 --- 10.0.0.1 ping statistics --- 00:49:09.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:09.353 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:49:09.353 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:09.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:09.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:49:09.354 00:49:09.354 --- 10.0.0.2 ping statistics --- 00:49:09.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:09.354 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73243 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73243 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73243 ']' 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:09.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
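The trace above has just finished assembling the veth/netns test topology (nvmf_veth_init) and is now launching nvmf_tgt inside the target namespace. As a rough, minimal sketch of what those ip/iptables entries amount to — illustrative names and addresses only, not the exact helpers from nvmf/common.sh:

  # sketch: one initiator-side and one target-side veth pair, joined by a host bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # only the target end moves into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up         # bridge joins both host-side peers
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                              # initiator can reach the target address

Once the pings pass, the test starts nvmf_tgt via 'ip netns exec' and blocks on /var/tmp/spdk.sock before issuing any RPCs, which is what the "Waiting for process to start up and listen on UNIX domain socket" message just above reflects.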
00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:09.354 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:49:09.354 [2024-12-09 10:01:16.802453] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:49:09.354 [2024-12-09 10:01:16.802552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:09.614 [2024-12-09 10:01:16.959312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:09.614 [2024-12-09 10:01:17.000944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:09.614 [2024-12-09 10:01:17.001228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:09.614 [2024-12-09 10:01:17.001321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:09.614 [2024-12-09 10:01:17.001342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:09.614 [2024-12-09 10:01:17.001351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:09.614 [2024-12-09 10:01:17.001720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:09.614 [2024-12-09 10:01:17.034612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:09.614 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:09.614 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:49:09.614 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:09.614 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:09.614 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:49:09.873 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:09.873 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:49:09.874 [2024-12-09 10:01:17.137044] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:49:09.874 Malloc0 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:49:09.874 [2024-12-09 10:01:17.172568] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73262 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73263 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73264 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:49:09.874 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73262 00:49:09.874 [2024-12-09 10:01:17.371056] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:49:10.133 [2024-12-09 10:01:17.401178] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:49:10.133 [2024-12-09 10:01:17.401506] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:49:11.079 Initializing NVMe Controllers 00:49:11.079 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:49:11.079 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:49:11.079 Initialization complete. Launching workers. 00:49:11.079 ======================================================== 00:49:11.079 Latency(us) 00:49:11.079 Device Information : IOPS MiB/s Average min max 00:49:11.079 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3462.00 13.52 288.40 142.71 1220.54 00:49:11.079 ======================================================== 00:49:11.079 Total : 3462.00 13.52 288.40 142.71 1220.54 00:49:11.079 00:49:11.079 Initializing NVMe Controllers 00:49:11.079 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:49:11.079 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:49:11.079 Initialization complete. Launching workers. 00:49:11.079 ======================================================== 00:49:11.079 Latency(us) 00:49:11.079 Device Information : IOPS MiB/s Average min max 00:49:11.079 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3485.00 13.61 286.57 162.38 661.05 00:49:11.079 ======================================================== 00:49:11.079 Total : 3485.00 13.61 286.57 162.38 661.05 00:49:11.079 00:49:11.079 Initializing NVMe Controllers 00:49:11.079 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:49:11.079 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:49:11.079 Initialization complete. Launching workers. 
00:49:11.079 ======================================================== 00:49:11.079 Latency(us) 00:49:11.079 Device Information : IOPS MiB/s Average min max 00:49:11.079 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3490.99 13.64 286.06 137.70 558.48 00:49:11.079 ======================================================== 00:49:11.079 Total : 3490.99 13.64 286.06 137.70 558.48 00:49:11.079 00:49:11.079 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73263 00:49:11.079 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73264 00:49:11.079 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:49:11.079 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:49:11.079 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:11.079 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:49:11.079 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:11.079 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:49:11.079 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:11.079 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:11.079 rmmod nvme_tcp 00:49:11.079 rmmod nvme_fabrics 00:49:11.079 rmmod nvme_keyring 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73243 ']' 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73243 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73243 ']' 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73243 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73243 00:49:11.374 killing process with pid 73243 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73243' 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73243 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73243 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:11.374 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:11.635 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:11.635 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:11.635 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:11.635 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:11.635 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:11.635 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:11.635 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:11.635 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:49:11.635 00:49:11.635 real 0m2.914s 00:49:11.635 user 0m5.077s 00:49:11.635 
sys 0m1.285s 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:49:11.635 ************************************ 00:49:11.635 END TEST nvmf_control_msg_list 00:49:11.635 ************************************ 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:49:11.635 ************************************ 00:49:11.635 START TEST nvmf_wait_for_buf 00:49:11.635 ************************************ 00:49:11.635 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:49:11.896 * Looking for test storage... 00:49:11.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:11.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:11.896 --rc genhtml_branch_coverage=1 00:49:11.896 --rc genhtml_function_coverage=1 00:49:11.896 --rc genhtml_legend=1 00:49:11.896 --rc geninfo_all_blocks=1 00:49:11.896 --rc geninfo_unexecuted_blocks=1 00:49:11.896 00:49:11.896 ' 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:11.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:11.896 --rc genhtml_branch_coverage=1 00:49:11.896 --rc genhtml_function_coverage=1 00:49:11.896 --rc genhtml_legend=1 00:49:11.896 --rc geninfo_all_blocks=1 00:49:11.896 --rc geninfo_unexecuted_blocks=1 00:49:11.896 00:49:11.896 ' 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:11.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:11.896 --rc genhtml_branch_coverage=1 00:49:11.896 --rc genhtml_function_coverage=1 00:49:11.896 --rc genhtml_legend=1 00:49:11.896 --rc geninfo_all_blocks=1 00:49:11.896 --rc geninfo_unexecuted_blocks=1 00:49:11.896 00:49:11.896 ' 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:11.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:11.896 --rc genhtml_branch_coverage=1 00:49:11.896 --rc genhtml_function_coverage=1 00:49:11.896 --rc genhtml_legend=1 00:49:11.896 --rc geninfo_all_blocks=1 00:49:11.896 --rc geninfo_unexecuted_blocks=1 00:49:11.896 00:49:11.896 ' 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:11.896 10:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:11.896 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:11.897 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
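The "line 33: [: : integer expression expected" message above is ordinary bash noise rather than a test failure: build_nvmf_app_args runs a numeric test on a variable that is empty in this environment, and [ "" -eq 1 ] prints that diagnostic and returns non-zero, so the guarded branch is simply skipped. A generic reproduction under an assumed variable name (the actual variable checked at common.sh line 33 is not shown in this trace):

SOME_FLAG=''                                   # illustrative name only, empty as in this run
if [ "$SOME_FLAG" -eq 1 ]; then                # prints "[: : integer expression expected"
    echo 'flag set'
fi
if [ "${SOME_FLAG:-0}" -eq 1 ]; then echo 'flag set'; fi   # quieter: default empty to 0
if [[ $SOME_FLAG == 1 ]]; then echo 'flag set'; fi         # or compare as a string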
00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:11.897 Cannot find device "nvmf_init_br" 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:11.897 Cannot find device "nvmf_init_br2" 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:49:11.897 Cannot find device "nvmf_tgt_br" 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:49:11.897 Cannot find device "nvmf_tgt_br2" 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:11.897 Cannot find device "nvmf_init_br" 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:49:11.897 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:12.157 Cannot find device "nvmf_init_br2" 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:12.157 Cannot find device "nvmf_tgt_br" 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:12.157 Cannot find device "nvmf_tgt_br2" 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:12.157 Cannot find device "nvmf_br" 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:12.157 Cannot find device "nvmf_init_if" 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:12.157 Cannot find device "nvmf_init_if2" 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:12.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:12.157 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:12.157 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
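Condensed, the nvmf_veth_init sequence traced above builds the test topology: namespace nvmf_tgt_ns_spdk holds the two target interfaces (nvmf_tgt_if at 10.0.0.3/24 and nvmf_tgt_if2 at 10.0.0.4/24), the host keeps the two initiator interfaces (nvmf_init_if at 10.0.0.1/24 and nvmf_init_if2 at 10.0.0.2/24), and the four veth peers are enslaved to the bridge nvmf_br so the initiator and target sides can reach each other. A stripped-down sketch of the same setup, with the pre-existing-device cleanup and error handling omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator pair 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2     # initiator pair 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target pair 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2      # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                            # bridge the host-side peers together
done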
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:12.417 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:12.417 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:49:12.417 00:49:12.417 --- 10.0.0.3 ping statistics --- 00:49:12.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:12.417 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:12.417 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:49:12.417 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:49:12.417 00:49:12.417 --- 10.0.0.4 ping statistics --- 00:49:12.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:12.417 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:12.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:12.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:49:12.417 00:49:12.417 --- 10.0.0.1 ping statistics --- 00:49:12.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:12.417 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:12.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:49:12.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:49:12.417 00:49:12.417 --- 10.0.0.2 ping statistics --- 00:49:12.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:12.417 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:12.417 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73513 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73513 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73513 ']' 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:12.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:12.418 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.418 [2024-12-09 10:01:19.820270] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
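With the topology in place, nvmfappstart prepends the namespace wrapper to the application command, so nvmf_tgt sees only the 10.0.0.3/10.0.0.4 interfaces, and waitforlisten blocks until the RPC socket answers before the test goes on. A simplified sketch of that launch pattern; the polling loop is an illustration of what the wait amounts to, not the exact helper from autotest_common.sh:

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)
"${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" --wait-for-rpc &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || { echo 'nvmf_tgt exited before its RPC socket came up' >&2; exit 1; }
    sleep 0.5
done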
00:49:12.418 [2024-12-09 10:01:19.820370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:12.678 [2024-12-09 10:01:19.972566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:12.678 [2024-12-09 10:01:20.003376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:12.678 [2024-12-09 10:01:20.003441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:12.678 [2024-12-09 10:01:20.003454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:12.678 [2024-12-09 10:01:20.003463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:12.678 [2024-12-09 10:01:20.003471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:12.678 [2024-12-09 10:01:20.003800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.678 10:01:20 
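These RPCs are what make the wait_for_buf case interesting: before framework_start_init the accel caches are disabled and the shared iobuf small pool is shrunk to 154 buffers of 8192 bytes, so the TCP transport is forced to exhaust the pool under load; after the perf run the script reads the nvmf_TCP small_pool.retry counter (4712 further down in this trace) and checks it against zero. Issued by hand against the target started above, the same sequence would look roughly like the following, using rpc.py directly instead of the test's rpc_cmd wrapper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
$rpc -s $sock accel_set_options --small-cache-size 0 --large-cache-size 0
$rpc -s $sock iobuf_set_options --small-pool-count 154 --small_bufsize=8192
$rpc -s $sock framework_start_init
$rpc -s $sock bdev_malloc_create -b Malloc0 32 512                      # 32 MiB bdev, 512-byte blocks
$rpc -s $sock nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24       # deliberately small buffer settings
$rpc -s $sock nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc -s $sock nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
$rpc -s $sock iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'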
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.678 [2024-12-09 10:01:20.142123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.678 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.939 Malloc0 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.939 [2024-12-09 10:01:20.188620] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:12.939 [2024-12-09 10:01:20.216717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:12.939 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:49:12.939 [2024-12-09 10:01:20.423107] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:49:14.320 Initializing NVMe Controllers 00:49:14.320 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:49:14.320 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:49:14.320 Initialization complete. Launching workers. 00:49:14.320 ======================================================== 00:49:14.320 Latency(us) 00:49:14.320 Device Information : IOPS MiB/s Average min max 00:49:14.320 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 496.00 62.00 8104.67 4946.73 15989.53 00:49:14.320 ======================================================== 00:49:14.320 Total : 496.00 62.00 8104.67 4946.73 15989.53 00:49:14.320 00:49:14.320 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:49:14.320 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:14.320 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:49:14.320 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:14.320 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4712 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4712 -eq 0 ]] 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:14.585 rmmod nvme_tcp 00:49:14.585 rmmod nvme_fabrics 00:49:14.585 rmmod nvme_keyring 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73513 ']' 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73513 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73513 ']' 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73513 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73513 00:49:14.585 killing process with pid 73513 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73513' 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73513 00:49:14.585 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73513 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:14.843 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
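Teardown mirrors the setup: nvmftestfini removes the module stack (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above), kills the recorded target PID, and then strips only the firewall rules the test itself installed. That works because the ipts wrapper tagged every rule with an "-m comment --comment 'SPDK_NVMF:...'" marker when it was added, so the iptr step can filter the saved ruleset and restore everything else untouched. The idiom in isolation:

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# ... test runs ...
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the tagged rules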
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:49:15.102 00:49:15.102 real 0m3.308s 00:49:15.102 user 0m2.625s 00:49:15.102 sys 0m0.771s 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:15.102 ************************************ 00:49:15.102 END TEST nvmf_wait_for_buf 00:49:15.102 ************************************ 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:49:15.102 ************************************ 00:49:15.102 START TEST nvmf_nsid 00:49:15.102 ************************************ 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:49:15.102 * Looking for test storage... 
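Each test script in this run is driven through run_test, which prints the START TEST / END TEST banners, times the script (the real/user/sys lines above), and propagates its exit status; the "Looking for test storage" / "Found test storage" pair comes from the script locating its own directory before it starts. A rough sketch of what such a wrapper amounts to, simplified from the behaviour visible in the trace rather than copied from autotest_common.sh:

run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}
run_test_sketch nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp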
00:49:15.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:49:15.102 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.362 --rc genhtml_branch_coverage=1 00:49:15.362 --rc genhtml_function_coverage=1 00:49:15.362 --rc genhtml_legend=1 00:49:15.362 --rc geninfo_all_blocks=1 00:49:15.362 --rc geninfo_unexecuted_blocks=1 00:49:15.362 00:49:15.362 ' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.362 --rc genhtml_branch_coverage=1 00:49:15.362 --rc genhtml_function_coverage=1 00:49:15.362 --rc genhtml_legend=1 00:49:15.362 --rc geninfo_all_blocks=1 00:49:15.362 --rc geninfo_unexecuted_blocks=1 00:49:15.362 00:49:15.362 ' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.362 --rc genhtml_branch_coverage=1 00:49:15.362 --rc genhtml_function_coverage=1 00:49:15.362 --rc genhtml_legend=1 00:49:15.362 --rc geninfo_all_blocks=1 00:49:15.362 --rc geninfo_unexecuted_blocks=1 00:49:15.362 00:49:15.362 ' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:15.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.362 --rc genhtml_branch_coverage=1 00:49:15.362 --rc genhtml_function_coverage=1 00:49:15.362 --rc genhtml_legend=1 00:49:15.362 --rc geninfo_all_blocks=1 00:49:15.362 --rc geninfo_unexecuted_blocks=1 00:49:15.362 00:49:15.362 ' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
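Sourcing test/nvmf/common.sh (next in the trace) gives the nsid test the same defaults the wait_for_buf run used: the 4420/4421/4422 listener ports, a host NQN freshly generated with nvme gen-hostnqn, and a host ID equal to the UUID portion of that NQN (eacce601-ab17-4447-87df-663b0f4f5455 here). A small sketch of producing those two values; the parameter expansion used to split off the UUID is an assumption for illustration, not a quote of the script:

NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:eacce601-...
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}        # keep only the UUID suffix (illustrative)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "host NQN $NVME_HOSTNQN, host ID $NVME_HOSTID"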
00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:15.362 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:15.362 Cannot find device "nvmf_init_br" 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:15.362 Cannot find device "nvmf_init_br2" 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:49:15.362 Cannot find device "nvmf_tgt_br" 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:49:15.362 Cannot find device "nvmf_tgt_br2" 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:15.362 Cannot find device "nvmf_init_br" 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:15.362 Cannot find device "nvmf_init_br2" 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:15.362 Cannot find device "nvmf_tgt_br" 00:49:15.362 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:15.363 Cannot find device "nvmf_tgt_br2" 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:15.363 Cannot find device "nvmf_br" 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:15.363 Cannot find device "nvmf_init_if" 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:15.363 Cannot find device "nvmf_init_if2" 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:15.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:49:15.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:15.363 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:15.621 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:15.621 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:15.621 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:49:15.621 00:49:15.621 --- 10.0.0.3 ping statistics --- 00:49:15.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:15.621 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:15.621 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:49:15.621 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:49:15.621 00:49:15.621 --- 10.0.0.4 ping statistics --- 00:49:15.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:15.621 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:15.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:15.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:49:15.621 00:49:15.621 --- 10.0.0.1 ping statistics --- 00:49:15.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:15.621 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:15.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:49:15.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:49:15.621 00:49:15.621 --- 10.0.0.2 ping statistics --- 00:49:15.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:15.621 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73769 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73769 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73769 ']' 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:15.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:15.621 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:15.880 [2024-12-09 10:01:23.177851] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
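The iptables rules that open port 4420 above are tagged so they can be located and removed again at teardown. From the expansions traced at nvmf/common.sh@790 (ipts) and @791 (iptr, seen later during cleanup), the helpers behave roughly like the sketch below; the exact function bodies are an assumption reconstructed from the trace:

  ipts() {
      # run the given iptables rule, tagging it with its own rule text
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  iptr() {
      # drop every rule carrying the SPDK_NVMF tag, keep everything else
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }

Connectivity is then verified in both directions (host pings 10.0.0.3/10.0.0.4, the namespace pings 10.0.0.1/10.0.0.2) before the first target is started inside the namespace with ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 1 and waitforlisten polls its RPC socket.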
00:49:15.880 [2024-12-09 10:01:23.177946] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:15.880 [2024-12-09 10:01:23.331464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:15.880 [2024-12-09 10:01:23.362491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:15.880 [2024-12-09 10:01:23.362565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:15.880 [2024-12-09 10:01:23.362577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:15.880 [2024-12-09 10:01:23.362586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:15.880 [2024-12-09 10:01:23.362593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:15.880 [2024-12-09 10:01:23.362903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:16.138 [2024-12-09 10:01:23.391671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73795 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:49:16.138 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ecbcf621-8c98-4dbe-9af2-66a8c4247a10 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=f3ddb623-2c7f-437d-b841-789cd1598d80 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=be3ad43d-7119-406e-9966-8efac783b293 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:16.139 null0 00:49:16.139 null1 00:49:16.139 null2 00:49:16.139 [2024-12-09 10:01:23.535739] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:16.139 [2024-12-09 10:01:23.536327] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:49:16.139 [2024-12-09 10:01:23.536404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73795 ] 00:49:16.139 [2024-12-09 10:01:23.559884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73795 /var/tmp/tgt2.sock 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73795 ']' 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:16.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
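At this point the nsid test is juggling two targets: the namespaced nvmf_tgt (pid 73769) and a second spdk_tgt (pid 73795, core mask 0x2) driven over /var/tmp/tgt2.sock, which ends up exposing nqn.2024-10.io.spdk:cnode2 on 10.0.0.1:4421. Three namespace UUIDs have just been generated with uuidgen and the null0/null1/null2 bdevs created, but the rpc_cmd payloads themselves are not echoed in the trace, so the fragment below is only a plausible shape for pinning namespaces to those UUIDs with standard SPDK rpc.py calls; the socket, bdev sizes, and exact flags are assumptions:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/tgt2.sock   # assumed; the trace does not show which target gets which call
  $rpc -s $sock nvmf_create_transport -t tcp
  # three null bdevs backing the three namespaces (sizes are illustrative)
  $rpc -s $sock bdev_null_create null0 100 4096
  $rpc -s $sock bdev_null_create null1 100 4096
  $rpc -s $sock bdev_null_create null2 100 4096
  # one subsystem carrying the namespaces, each pinned to a generated UUID
  $rpc -s $sock nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 --uuid "$ns1uuid"
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 --uuid "$ns2uuid"
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 --uuid "$ns3uuid"
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421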
00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:16.139 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:16.398 [2024-12-09 10:01:23.690376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:16.398 [2024-12-09 10:01:23.748618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:16.398 [2024-12-09 10:01:23.799919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:16.657 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:16.657 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:49:16.657 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:49:16.916 [2024-12-09 10:01:24.381980] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:16.916 [2024-12-09 10:01:24.398080] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:49:17.174 nvme0n1 nvme0n2 00:49:17.174 nvme1n1 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid=eacce601-ab17-4447-87df-663b0f4f5455 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:49:17.174 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:49:18.110 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:49:18.110 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:49:18.110 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:49:18.110 10:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ecbcf621-8c98-4dbe-9af2-66a8c4247a10 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ecbcf6218c984dbe9af266a8c4247a10 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo ECBCF6218C984DBE9AF266A8C4247A10 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ ECBCF6218C984DBE9AF266A8C4247A10 == \E\C\B\C\F\6\2\1\8\C\9\8\4\D\B\E\9\A\F\2\6\6\A\8\C\4\2\4\7\A\1\0 ]] 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid f3ddb623-2c7f-437d-b841-789cd1598d80 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f3ddb6232c7f437db841789cd1598d80 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F3DDB6232C7F437DB841789CD1598D80 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ F3DDB6232C7F437DB841789CD1598D80 == \F\3\D\D\B\6\2\3\2\C\7\F\4\3\7\D\B\8\4\1\7\8\9\C\D\1\5\9\8\D\8\0 ]] 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:49:18.369 10:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid be3ad43d-7119-406e-9966-8efac783b293 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=be3ad43d7119406e99668efac783b293 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BE3AD43D7119406E99668EFAC783B293 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ BE3AD43D7119406E99668EFAC783B293 == \B\E\3\A\D\4\3\D\7\1\1\9\4\0\6\E\9\9\6\6\8\E\F\A\C\7\8\3\B\2\9\3 ]] 00:49:18.369 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:49:18.628 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:49:18.628 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:49:18.628 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73795 00:49:18.628 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73795 ']' 00:49:18.628 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73795 00:49:18.628 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:49:18.628 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:18.628 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73795 00:49:18.628 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:49:18.628 killing process with pid 73795 00:49:18.628 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:49:18.628 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73795' 00:49:18.628 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73795 00:49:18.628 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73795 00:49:18.889 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:49:18.889 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:18.889 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:49:18.889 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:49:18.889 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:49:18.889 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:18.889 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:18.889 rmmod nvme_tcp 00:49:18.889 rmmod nvme_fabrics 00:49:19.147 rmmod nvme_keyring 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73769 ']' 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73769 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73769 ']' 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73769 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73769 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:19.147 killing process with pid 73769 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73769' 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73769 00:49:19.147 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73769 00:49:19.148 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:19.148 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:19.148 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:19.148 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:49:19.148 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:49:19.148 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:49:19.406 00:49:19.406 real 0m4.436s 00:49:19.406 user 0m6.612s 00:49:19.406 sys 0m1.525s 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:19.406 ************************************ 00:49:19.406 END TEST nvmf_nsid 00:49:19.406 ************************************ 00:49:19.406 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:19.665 10:01:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:49:19.665 00:49:19.665 real 5m20.430s 00:49:19.665 user 11m23.516s 00:49:19.665 sys 1m6.541s 00:49:19.665 10:01:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:19.665 10:01:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:49:19.665 ************************************ 00:49:19.665 END TEST nvmf_target_extra 00:49:19.665 ************************************ 00:49:19.665 10:01:26 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:49:19.665 10:01:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:19.665 10:01:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:19.665 10:01:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:49:19.665 ************************************ 00:49:19.665 START TEST nvmf_host 00:49:19.665 ************************************ 00:49:19.665 10:01:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:49:19.665 * Looking for test storage... 
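The nsid test that just completed (END TEST nvmf_nsid above) verified that each connected namespace reports an NGUID matching the UUID it was provisioned with: uuid2nguid simply strips the dashes from the UUID (tr -d -, nvmf/common.sh@787) and nvme_get_nguid reads the NGUID back over the wire with nvme id-ns. A condensed sketch of that check as it appears in the nsid.sh trace; the case normalisation is inferred from the upper-case values echoed at nsid.sh@43:

  # expected NGUID is just the UUID without dashes
  expected=$(tr -d - <<< "$ns1uuid")
  # NGUID the target actually reports for namespace 1
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  # compare case-insensitively (the trace echoes both sides in upper case)
  [[ ${actual^^} == "${expected^^}" ]]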
00:49:19.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:49:19.665 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:19.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:19.925 --rc genhtml_branch_coverage=1 00:49:19.925 --rc genhtml_function_coverage=1 00:49:19.925 --rc genhtml_legend=1 00:49:19.925 --rc geninfo_all_blocks=1 00:49:19.925 --rc geninfo_unexecuted_blocks=1 00:49:19.925 00:49:19.925 ' 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:19.925 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:49:19.925 --rc genhtml_branch_coverage=1 00:49:19.925 --rc genhtml_function_coverage=1 00:49:19.925 --rc genhtml_legend=1 00:49:19.925 --rc geninfo_all_blocks=1 00:49:19.925 --rc geninfo_unexecuted_blocks=1 00:49:19.925 00:49:19.925 ' 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:19.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:19.925 --rc genhtml_branch_coverage=1 00:49:19.925 --rc genhtml_function_coverage=1 00:49:19.925 --rc genhtml_legend=1 00:49:19.925 --rc geninfo_all_blocks=1 00:49:19.925 --rc geninfo_unexecuted_blocks=1 00:49:19.925 00:49:19.925 ' 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:19.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:19.925 --rc genhtml_branch_coverage=1 00:49:19.925 --rc genhtml_function_coverage=1 00:49:19.925 --rc genhtml_legend=1 00:49:19.925 --rc geninfo_all_blocks=1 00:49:19.925 --rc geninfo_unexecuted_blocks=1 00:49:19.925 00:49:19.925 ' 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:19.925 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:19.926 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:49:19.926 
10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:19.926 ************************************ 00:49:19.926 START TEST nvmf_identify 00:49:19.926 ************************************ 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:49:19.926 * Looking for test storage... 00:49:19.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:19.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:19.926 --rc genhtml_branch_coverage=1 00:49:19.926 --rc genhtml_function_coverage=1 00:49:19.926 --rc genhtml_legend=1 00:49:19.926 --rc geninfo_all_blocks=1 00:49:19.926 --rc geninfo_unexecuted_blocks=1 00:49:19.926 00:49:19.926 ' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:19.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:19.926 --rc genhtml_branch_coverage=1 00:49:19.926 --rc genhtml_function_coverage=1 00:49:19.926 --rc genhtml_legend=1 00:49:19.926 --rc geninfo_all_blocks=1 00:49:19.926 --rc geninfo_unexecuted_blocks=1 00:49:19.926 00:49:19.926 ' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:19.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:19.926 --rc genhtml_branch_coverage=1 00:49:19.926 --rc genhtml_function_coverage=1 00:49:19.926 --rc genhtml_legend=1 00:49:19.926 --rc geninfo_all_blocks=1 00:49:19.926 --rc geninfo_unexecuted_blocks=1 00:49:19.926 00:49:19.926 ' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:19.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:19.926 --rc genhtml_branch_coverage=1 00:49:19.926 --rc genhtml_function_coverage=1 00:49:19.926 --rc genhtml_legend=1 00:49:19.926 --rc geninfo_all_blocks=1 00:49:19.926 --rc geninfo_unexecuted_blocks=1 00:49:19.926 00:49:19.926 ' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:19.926 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:19.927 
10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:19.927 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:19.927 10:01:27 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:19.927 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:20.186 Cannot find device "nvmf_init_br" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:20.186 Cannot find device "nvmf_init_br2" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:49:20.186 Cannot find device "nvmf_tgt_br" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:49:20.186 Cannot find device "nvmf_tgt_br2" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:20.186 Cannot find device "nvmf_init_br" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:20.186 Cannot find device "nvmf_init_br2" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:20.186 Cannot find device "nvmf_tgt_br" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:20.186 Cannot find device "nvmf_tgt_br2" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:20.186 Cannot find device "nvmf_br" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:20.186 Cannot find device "nvmf_init_if" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:20.186 Cannot find device "nvmf_init_if2" 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:20.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:20.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:20.186 
10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:20.186 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:20.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:49:20.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:49:20.445 00:49:20.445 --- 10.0.0.3 ping statistics --- 00:49:20.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:20.445 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:20.445 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:49:20.445 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:49:20.445 00:49:20.445 --- 10.0.0.4 ping statistics --- 00:49:20.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:20.445 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:20.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:20.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:49:20.445 00:49:20.445 --- 10.0.0.1 ping statistics --- 00:49:20.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:20.445 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:20.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:20.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:49:20.445 00:49:20.445 --- 10.0.0.2 ping statistics --- 00:49:20.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:20.445 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:20.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
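The nvmf_veth_init trace above reduces to the topology sketched below. This is a condensed, hedged reading of the common.sh steps rather than the script itself; the namespace, interface names, addresses, iptables rules, and ping checks are taken verbatim from the trace, and the loops are only shorthand for the per-device commands logged above.

# Sketch of the test topology nvmf_veth_init builds (names and addresses from the trace).
ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs; the *_br ends stay on the host.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators on .1/.2, targets on .3/.4 (matching the ping checks).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in from the initiator interfaces and forward across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks mirrored from the log: host to target namespace and back.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2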
00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74148 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:49:20.445 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74148 00:49:20.446 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74148 ']' 00:49:20.446 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:20.446 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:20.446 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:20.446 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:20.446 10:01:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:20.446 [2024-12-09 10:01:27.903855] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:49:20.446 [2024-12-09 10:01:27.904149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:20.704 [2024-12-09 10:01:28.062084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:20.704 [2024-12-09 10:01:28.102737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:20.704 [2024-12-09 10:01:28.103072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:20.704 [2024-12-09 10:01:28.103349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:20.704 [2024-12-09 10:01:28.103520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:20.704 [2024-12-09 10:01:28.103655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
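The target is then launched inside that namespace. The sketch below approximates the identify.sh@18-23 steps; the binary path and flags (-i 0, -e 0xFFFF, -m 0xF) come straight from the trace, while the RPC-socket poll is a simplified stand-in for the test's waitforlisten helper and assumes the stock scripts/rpc.py location in the same repo checkout.

# Start the NVMe-oF target in the namespace: -i 0 (shm id), -e 0xFFFF (tracepoint groups), -m 0xF (4-core mask).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified stand-in for waitforlisten: poll until the RPC socket answers.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done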
00:49:20.704 [2024-12-09 10:01:28.104677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:20.704 [2024-12-09 10:01:28.104820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:20.704 [2024-12-09 10:01:28.104942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:49:20.704 [2024-12-09 10:01:28.104947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:20.704 [2024-12-09 10:01:28.140983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:20.704 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:20.704 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:49:20.704 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:20.704 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.704 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:20.704 [2024-12-09 10:01:28.204643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:20.962 Malloc0 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:20.962 [2024-12-09 10:01:28.305883] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:20.962 [ 00:49:20.962 { 00:49:20.962 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:49:20.962 "subtype": "Discovery", 00:49:20.962 "listen_addresses": [ 00:49:20.962 { 00:49:20.962 "trtype": "TCP", 00:49:20.962 "adrfam": "IPv4", 00:49:20.962 "traddr": "10.0.0.3", 00:49:20.962 "trsvcid": "4420" 00:49:20.962 } 00:49:20.962 ], 00:49:20.962 "allow_any_host": true, 00:49:20.962 "hosts": [] 00:49:20.962 }, 00:49:20.962 { 00:49:20.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:49:20.962 "subtype": "NVMe", 00:49:20.962 "listen_addresses": [ 00:49:20.962 { 00:49:20.962 "trtype": "TCP", 00:49:20.962 "adrfam": "IPv4", 00:49:20.962 "traddr": "10.0.0.3", 00:49:20.962 "trsvcid": "4420" 00:49:20.962 } 00:49:20.962 ], 00:49:20.962 "allow_any_host": true, 00:49:20.962 "hosts": [], 00:49:20.962 "serial_number": "SPDK00000000000001", 00:49:20.962 "model_number": "SPDK bdev Controller", 00:49:20.962 "max_namespaces": 32, 00:49:20.962 "min_cntlid": 1, 00:49:20.962 "max_cntlid": 65519, 00:49:20.962 "namespaces": [ 00:49:20.962 { 00:49:20.962 "nsid": 1, 00:49:20.962 "bdev_name": "Malloc0", 00:49:20.962 "name": "Malloc0", 00:49:20.962 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:49:20.962 "eui64": "ABCDEF0123456789", 00:49:20.962 "uuid": "465fe57c-37c5-4108-a727-0f276f3bbb9b" 00:49:20.962 } 00:49:20.962 ] 00:49:20.962 } 00:49:20.962 ] 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.962 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:49:20.962 [2024-12-09 10:01:28.359070] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
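The rpc_cmd calls traced at identify.sh@24-37 amount to the bring-up below: one TCP transport, one malloc-backed namespace under nqn.2016-06.io.spdk:cnode1, and listeners for both the subsystem and the discovery service, followed by the spdk_nvme_identify run whose report appears further down. rpc.py is used here as a stand-in for the test's rpc_cmd wrapper; all arguments are copied from the trace.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# TCP transport (-t tcp -o) with 8192-byte in-capsule data (-u 8192).
$rpc nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks, added as a namespace of cnode1.
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

# Listen on the target-namespace address for both the subsystem and discovery.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_get_subsystems   # returns the JSON shown above

# Query the discovery controller over NVMe/TCP (the identify report below is its output).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all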
00:49:20.962 [2024-12-09 10:01:28.359150] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74171 ] 00:49:21.224 [2024-12-09 10:01:28.528741] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:49:21.224 [2024-12-09 10:01:28.528807] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:49:21.224 [2024-12-09 10:01:28.528815] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:49:21.224 [2024-12-09 10:01:28.528831] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:49:21.224 [2024-12-09 10:01:28.528845] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:49:21.224 [2024-12-09 10:01:28.529234] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:49:21.224 [2024-12-09 10:01:28.529300] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa20750 0 00:49:21.224 [2024-12-09 10:01:28.535978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:49:21.224 [2024-12-09 10:01:28.536006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:49:21.224 [2024-12-09 10:01:28.536014] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:49:21.224 [2024-12-09 10:01:28.536018] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:49:21.224 [2024-12-09 10:01:28.536063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.536073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.536077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa20750) 00:49:21.224 [2024-12-09 10:01:28.536092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:49:21.224 [2024-12-09 10:01:28.536128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84740, cid 0, qid 0 00:49:21.224 [2024-12-09 10:01:28.543978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.224 [2024-12-09 10:01:28.544004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.224 [2024-12-09 10:01:28.544010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84740) on tqpair=0xa20750 00:49:21.224 [2024-12-09 10:01:28.544030] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:49:21.224 [2024-12-09 10:01:28.544039] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:49:21.224 [2024-12-09 10:01:28.544055] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:49:21.224 [2024-12-09 10:01:28.544075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:49:21.224 [2024-12-09 10:01:28.544085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa20750) 00:49:21.224 [2024-12-09 10:01:28.544095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.224 [2024-12-09 10:01:28.544129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84740, cid 0, qid 0 00:49:21.224 [2024-12-09 10:01:28.544228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.224 [2024-12-09 10:01:28.544241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.224 [2024-12-09 10:01:28.544248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84740) on tqpair=0xa20750 00:49:21.224 [2024-12-09 10:01:28.544266] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:49:21.224 [2024-12-09 10:01:28.544281] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:49:21.224 [2024-12-09 10:01:28.544294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa20750) 00:49:21.224 [2024-12-09 10:01:28.544312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.224 [2024-12-09 10:01:28.544343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84740, cid 0, qid 0 00:49:21.224 [2024-12-09 10:01:28.544424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.224 [2024-12-09 10:01:28.544438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.224 [2024-12-09 10:01:28.544446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84740) on tqpair=0xa20750 00:49:21.224 [2024-12-09 10:01:28.544464] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:49:21.224 [2024-12-09 10:01:28.544475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:49:21.224 [2024-12-09 10:01:28.544484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa20750) 00:49:21.224 [2024-12-09 10:01:28.544501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.224 [2024-12-09 10:01:28.544532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84740, cid 0, qid 0 00:49:21.224 [2024-12-09 10:01:28.544608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.224 [2024-12-09 10:01:28.544618] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.224 [2024-12-09 10:01:28.544622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84740) on tqpair=0xa20750 00:49:21.224 [2024-12-09 10:01:28.544636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:49:21.224 [2024-12-09 10:01:28.544654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa20750) 00:49:21.224 [2024-12-09 10:01:28.544683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.224 [2024-12-09 10:01:28.544707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84740, cid 0, qid 0 00:49:21.224 [2024-12-09 10:01:28.544783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.224 [2024-12-09 10:01:28.544793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.224 [2024-12-09 10:01:28.544798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84740) on tqpair=0xa20750 00:49:21.224 [2024-12-09 10:01:28.544813] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:49:21.224 [2024-12-09 10:01:28.544823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:49:21.224 [2024-12-09 10:01:28.544838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:49:21.224 [2024-12-09 10:01:28.544946] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:49:21.224 [2024-12-09 10:01:28.544977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:49:21.224 [2024-12-09 10:01:28.544989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.544998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa20750) 00:49:21.224 [2024-12-09 10:01:28.545007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.224 [2024-12-09 10:01:28.545033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84740, cid 0, qid 0 00:49:21.224 [2024-12-09 10:01:28.545111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.224 [2024-12-09 10:01:28.545122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.224 [2024-12-09 10:01:28.545128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:49:21.224 [2024-12-09 10:01:28.545134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84740) on tqpair=0xa20750 00:49:21.224 [2024-12-09 10:01:28.545144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:49:21.224 [2024-12-09 10:01:28.545162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.545169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.545174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa20750) 00:49:21.224 [2024-12-09 10:01:28.545182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.224 [2024-12-09 10:01:28.545205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84740, cid 0, qid 0 00:49:21.224 [2024-12-09 10:01:28.545280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.224 [2024-12-09 10:01:28.545294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.224 [2024-12-09 10:01:28.545301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.224 [2024-12-09 10:01:28.545306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84740) on tqpair=0xa20750 00:49:21.224 [2024-12-09 10:01:28.545311] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:49:21.224 [2024-12-09 10:01:28.545318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:49:21.224 [2024-12-09 10:01:28.545327] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:49:21.225 [2024-12-09 10:01:28.545340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:49:21.225 [2024-12-09 10:01:28.545352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa20750) 00:49:21.225 [2024-12-09 10:01:28.545366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.225 [2024-12-09 10:01:28.545397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84740, cid 0, qid 0 00:49:21.225 [2024-12-09 10:01:28.545530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.225 [2024-12-09 10:01:28.545550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.225 [2024-12-09 10:01:28.545555] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545559] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa20750): datao=0, datal=4096, cccid=0 00:49:21.225 [2024-12-09 10:01:28.545565] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa84740) on tqpair(0xa20750): expected_datao=0, payload_size=4096 00:49:21.225 [2024-12-09 10:01:28.545570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:49:21.225 [2024-12-09 10:01:28.545579] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545584] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545594] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.225 [2024-12-09 10:01:28.545601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.225 [2024-12-09 10:01:28.545604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84740) on tqpair=0xa20750 00:49:21.225 [2024-12-09 10:01:28.545618] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:49:21.225 [2024-12-09 10:01:28.545624] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:49:21.225 [2024-12-09 10:01:28.545629] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:49:21.225 [2024-12-09 10:01:28.545635] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:49:21.225 [2024-12-09 10:01:28.545648] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:49:21.225 [2024-12-09 10:01:28.545659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:49:21.225 [2024-12-09 10:01:28.545675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:49:21.225 [2024-12-09 10:01:28.545689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa20750) 00:49:21.225 [2024-12-09 10:01:28.545711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:49:21.225 [2024-12-09 10:01:28.545736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84740, cid 0, qid 0 00:49:21.225 [2024-12-09 10:01:28.545823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.225 [2024-12-09 10:01:28.545834] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.225 [2024-12-09 10:01:28.545841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84740) on tqpair=0xa20750 00:49:21.225 [2024-12-09 10:01:28.545868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa20750) 00:49:21.225 [2024-12-09 10:01:28.545889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.225 [2024-12-09 10:01:28.545896] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa20750) 00:49:21.225 [2024-12-09 10:01:28.545914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.225 [2024-12-09 10:01:28.545925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa20750) 00:49:21.225 [2024-12-09 10:01:28.545949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.225 [2024-12-09 10:01:28.545971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.545987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.225 [2024-12-09 10:01:28.545998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.225 [2024-12-09 10:01:28.546008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:49:21.225 [2024-12-09 10:01:28.546021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:49:21.225 [2024-12-09 10:01:28.546030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa20750) 00:49:21.225 [2024-12-09 10:01:28.546042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.225 [2024-12-09 10:01:28.546069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84740, cid 0, qid 0 00:49:21.225 [2024-12-09 10:01:28.546080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa848c0, cid 1, qid 0 00:49:21.225 [2024-12-09 10:01:28.546089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84a40, cid 2, qid 0 00:49:21.225 [2024-12-09 10:01:28.546097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.225 [2024-12-09 10:01:28.546105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84d40, cid 4, qid 0 00:49:21.225 [2024-12-09 10:01:28.546232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.225 [2024-12-09 10:01:28.546244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.225 [2024-12-09 10:01:28.546249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84d40) on tqpair=0xa20750 00:49:21.225 [2024-12-09 10:01:28.546269] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:49:21.225 [2024-12-09 10:01:28.546281] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:49:21.225 [2024-12-09 10:01:28.546301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa20750) 00:49:21.225 [2024-12-09 10:01:28.546315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.225 [2024-12-09 10:01:28.546340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84d40, cid 4, qid 0 00:49:21.225 [2024-12-09 10:01:28.546438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.225 [2024-12-09 10:01:28.546451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.225 [2024-12-09 10:01:28.546456] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546460] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa20750): datao=0, datal=4096, cccid=4 00:49:21.225 [2024-12-09 10:01:28.546465] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa84d40) on tqpair(0xa20750): expected_datao=0, payload_size=4096 00:49:21.225 [2024-12-09 10:01:28.546470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546478] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546482] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.225 [2024-12-09 10:01:28.546507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.225 [2024-12-09 10:01:28.546514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84d40) on tqpair=0xa20750 00:49:21.225 [2024-12-09 10:01:28.546540] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:49:21.225 [2024-12-09 10:01:28.546602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa20750) 00:49:21.225 [2024-12-09 10:01:28.546629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.225 [2024-12-09 10:01:28.546638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa20750) 00:49:21.225 [2024-12-09 10:01:28.546654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.225 [2024-12-09 10:01:28.546687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xa84d40, cid 4, qid 0 00:49:21.225 [2024-12-09 10:01:28.546697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84ec0, cid 5, qid 0 00:49:21.225 [2024-12-09 10:01:28.546865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.225 [2024-12-09 10:01:28.546886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.225 [2024-12-09 10:01:28.546892] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546896] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa20750): datao=0, datal=1024, cccid=4 00:49:21.225 [2024-12-09 10:01:28.546901] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa84d40) on tqpair(0xa20750): expected_datao=0, payload_size=1024 00:49:21.225 [2024-12-09 10:01:28.546906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.225 [2024-12-09 10:01:28.546914] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.546918] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.546925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.226 [2024-12-09 10:01:28.546931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.226 [2024-12-09 10:01:28.546935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.546939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84ec0) on tqpair=0xa20750 00:49:21.226 [2024-12-09 10:01:28.546975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.226 [2024-12-09 10:01:28.546989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.226 [2024-12-09 10:01:28.546997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84d40) on tqpair=0xa20750 00:49:21.226 [2024-12-09 10:01:28.547024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa20750) 00:49:21.226 [2024-12-09 10:01:28.547040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.226 [2024-12-09 10:01:28.547073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84d40, cid 4, qid 0 00:49:21.226 [2024-12-09 10:01:28.547193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.226 [2024-12-09 10:01:28.547210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.226 [2024-12-09 10:01:28.547215] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547219] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa20750): datao=0, datal=3072, cccid=4 00:49:21.226 [2024-12-09 10:01:28.547224] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa84d40) on tqpair(0xa20750): expected_datao=0, payload_size=3072 00:49:21.226 [2024-12-09 10:01:28.547229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547237] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547244] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.226 [2024-12-09 10:01:28.547269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.226 [2024-12-09 10:01:28.547275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84d40) on tqpair=0xa20750 00:49:21.226 [2024-12-09 10:01:28.547300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa20750) 00:49:21.226 [2024-12-09 10:01:28.547323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.226 [2024-12-09 10:01:28.547358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84d40, cid 4, qid 0 00:49:21.226 [2024-12-09 10:01:28.547470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.226 [2024-12-09 10:01:28.547486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.226 [2024-12-09 10:01:28.547491] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547495] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa20750): datao=0, datal=8, cccid=4 00:49:21.226 [2024-12-09 10:01:28.547500] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa84d40) on tqpair(0xa20750): expected_datao=0, payload_size=8 00:49:21.226 [2024-12-09 10:01:28.547505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547513] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547517] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.226 [2024-12-09 10:01:28.547554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.226 [2024-12-09 10:01:28.547561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.226 [2024-12-09 10:01:28.547568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84d40) on tqpair=0xa20750 00:49:21.226 ===================================================== 00:49:21.226 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:49:21.226 ===================================================== 00:49:21.226 Controller Capabilities/Features 00:49:21.226 ================================ 00:49:21.226 Vendor ID: 0000 00:49:21.226 Subsystem Vendor ID: 0000 00:49:21.226 Serial Number: .................... 00:49:21.226 Model Number: ........................................ 
00:49:21.226 Firmware Version: 25.01 00:49:21.226 Recommended Arb Burst: 0 00:49:21.226 IEEE OUI Identifier: 00 00 00 00:49:21.226 Multi-path I/O 00:49:21.226 May have multiple subsystem ports: No 00:49:21.226 May have multiple controllers: No 00:49:21.226 Associated with SR-IOV VF: No 00:49:21.226 Max Data Transfer Size: 131072 00:49:21.226 Max Number of Namespaces: 0 00:49:21.226 Max Number of I/O Queues: 1024 00:49:21.226 NVMe Specification Version (VS): 1.3 00:49:21.226 NVMe Specification Version (Identify): 1.3 00:49:21.226 Maximum Queue Entries: 128 00:49:21.226 Contiguous Queues Required: Yes 00:49:21.226 Arbitration Mechanisms Supported 00:49:21.226 Weighted Round Robin: Not Supported 00:49:21.226 Vendor Specific: Not Supported 00:49:21.226 Reset Timeout: 15000 ms 00:49:21.226 Doorbell Stride: 4 bytes 00:49:21.226 NVM Subsystem Reset: Not Supported 00:49:21.226 Command Sets Supported 00:49:21.226 NVM Command Set: Supported 00:49:21.226 Boot Partition: Not Supported 00:49:21.226 Memory Page Size Minimum: 4096 bytes 00:49:21.226 Memory Page Size Maximum: 4096 bytes 00:49:21.226 Persistent Memory Region: Not Supported 00:49:21.226 Optional Asynchronous Events Supported 00:49:21.226 Namespace Attribute Notices: Not Supported 00:49:21.226 Firmware Activation Notices: Not Supported 00:49:21.226 ANA Change Notices: Not Supported 00:49:21.226 PLE Aggregate Log Change Notices: Not Supported 00:49:21.226 LBA Status Info Alert Notices: Not Supported 00:49:21.226 EGE Aggregate Log Change Notices: Not Supported 00:49:21.226 Normal NVM Subsystem Shutdown event: Not Supported 00:49:21.226 Zone Descriptor Change Notices: Not Supported 00:49:21.226 Discovery Log Change Notices: Supported 00:49:21.226 Controller Attributes 00:49:21.226 128-bit Host Identifier: Not Supported 00:49:21.226 Non-Operational Permissive Mode: Not Supported 00:49:21.226 NVM Sets: Not Supported 00:49:21.226 Read Recovery Levels: Not Supported 00:49:21.226 Endurance Groups: Not Supported 00:49:21.226 Predictable Latency Mode: Not Supported 00:49:21.226 Traffic Based Keep ALive: Not Supported 00:49:21.226 Namespace Granularity: Not Supported 00:49:21.226 SQ Associations: Not Supported 00:49:21.226 UUID List: Not Supported 00:49:21.226 Multi-Domain Subsystem: Not Supported 00:49:21.226 Fixed Capacity Management: Not Supported 00:49:21.226 Variable Capacity Management: Not Supported 00:49:21.226 Delete Endurance Group: Not Supported 00:49:21.226 Delete NVM Set: Not Supported 00:49:21.226 Extended LBA Formats Supported: Not Supported 00:49:21.226 Flexible Data Placement Supported: Not Supported 00:49:21.226 00:49:21.226 Controller Memory Buffer Support 00:49:21.226 ================================ 00:49:21.226 Supported: No 00:49:21.226 00:49:21.226 Persistent Memory Region Support 00:49:21.226 ================================ 00:49:21.226 Supported: No 00:49:21.226 00:49:21.226 Admin Command Set Attributes 00:49:21.226 ============================ 00:49:21.226 Security Send/Receive: Not Supported 00:49:21.226 Format NVM: Not Supported 00:49:21.226 Firmware Activate/Download: Not Supported 00:49:21.226 Namespace Management: Not Supported 00:49:21.226 Device Self-Test: Not Supported 00:49:21.226 Directives: Not Supported 00:49:21.226 NVMe-MI: Not Supported 00:49:21.226 Virtualization Management: Not Supported 00:49:21.226 Doorbell Buffer Config: Not Supported 00:49:21.226 Get LBA Status Capability: Not Supported 00:49:21.226 Command & Feature Lockdown Capability: Not Supported 00:49:21.226 Abort Command Limit: 1 00:49:21.226 Async 
Event Request Limit: 4 00:49:21.226 Number of Firmware Slots: N/A 00:49:21.226 Firmware Slot 1 Read-Only: N/A 00:49:21.226 Firmware Activation Without Reset: N/A 00:49:21.226 Multiple Update Detection Support: N/A 00:49:21.226 Firmware Update Granularity: No Information Provided 00:49:21.226 Per-Namespace SMART Log: No 00:49:21.226 Asymmetric Namespace Access Log Page: Not Supported 00:49:21.226 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:49:21.226 Command Effects Log Page: Not Supported 00:49:21.226 Get Log Page Extended Data: Supported 00:49:21.226 Telemetry Log Pages: Not Supported 00:49:21.226 Persistent Event Log Pages: Not Supported 00:49:21.226 Supported Log Pages Log Page: May Support 00:49:21.226 Commands Supported & Effects Log Page: Not Supported 00:49:21.226 Feature Identifiers & Effects Log Page:May Support 00:49:21.226 NVMe-MI Commands & Effects Log Page: May Support 00:49:21.226 Data Area 4 for Telemetry Log: Not Supported 00:49:21.226 Error Log Page Entries Supported: 128 00:49:21.226 Keep Alive: Not Supported 00:49:21.226 00:49:21.226 NVM Command Set Attributes 00:49:21.226 ========================== 00:49:21.226 Submission Queue Entry Size 00:49:21.227 Max: 1 00:49:21.227 Min: 1 00:49:21.227 Completion Queue Entry Size 00:49:21.227 Max: 1 00:49:21.227 Min: 1 00:49:21.227 Number of Namespaces: 0 00:49:21.227 Compare Command: Not Supported 00:49:21.227 Write Uncorrectable Command: Not Supported 00:49:21.227 Dataset Management Command: Not Supported 00:49:21.227 Write Zeroes Command: Not Supported 00:49:21.227 Set Features Save Field: Not Supported 00:49:21.227 Reservations: Not Supported 00:49:21.227 Timestamp: Not Supported 00:49:21.227 Copy: Not Supported 00:49:21.227 Volatile Write Cache: Not Present 00:49:21.227 Atomic Write Unit (Normal): 1 00:49:21.227 Atomic Write Unit (PFail): 1 00:49:21.227 Atomic Compare & Write Unit: 1 00:49:21.227 Fused Compare & Write: Supported 00:49:21.227 Scatter-Gather List 00:49:21.227 SGL Command Set: Supported 00:49:21.227 SGL Keyed: Supported 00:49:21.227 SGL Bit Bucket Descriptor: Not Supported 00:49:21.227 SGL Metadata Pointer: Not Supported 00:49:21.227 Oversized SGL: Not Supported 00:49:21.227 SGL Metadata Address: Not Supported 00:49:21.227 SGL Offset: Supported 00:49:21.227 Transport SGL Data Block: Not Supported 00:49:21.227 Replay Protected Memory Block: Not Supported 00:49:21.227 00:49:21.227 Firmware Slot Information 00:49:21.227 ========================= 00:49:21.227 Active slot: 0 00:49:21.227 00:49:21.227 00:49:21.227 Error Log 00:49:21.227 ========= 00:49:21.227 00:49:21.227 Active Namespaces 00:49:21.227 ================= 00:49:21.227 Discovery Log Page 00:49:21.227 ================== 00:49:21.227 Generation Counter: 2 00:49:21.227 Number of Records: 2 00:49:21.227 Record Format: 0 00:49:21.227 00:49:21.227 Discovery Log Entry 0 00:49:21.227 ---------------------- 00:49:21.227 Transport Type: 3 (TCP) 00:49:21.227 Address Family: 1 (IPv4) 00:49:21.227 Subsystem Type: 3 (Current Discovery Subsystem) 00:49:21.227 Entry Flags: 00:49:21.227 Duplicate Returned Information: 1 00:49:21.227 Explicit Persistent Connection Support for Discovery: 1 00:49:21.227 Transport Requirements: 00:49:21.227 Secure Channel: Not Required 00:49:21.227 Port ID: 0 (0x0000) 00:49:21.227 Controller ID: 65535 (0xffff) 00:49:21.227 Admin Max SQ Size: 128 00:49:21.227 Transport Service Identifier: 4420 00:49:21.227 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:49:21.227 Transport Address: 10.0.0.3 00:49:21.227 
Discovery Log Entry 1 00:49:21.227 ---------------------- 00:49:21.227 Transport Type: 3 (TCP) 00:49:21.227 Address Family: 1 (IPv4) 00:49:21.227 Subsystem Type: 2 (NVM Subsystem) 00:49:21.227 Entry Flags: 00:49:21.227 Duplicate Returned Information: 0 00:49:21.227 Explicit Persistent Connection Support for Discovery: 0 00:49:21.227 Transport Requirements: 00:49:21.227 Secure Channel: Not Required 00:49:21.227 Port ID: 0 (0x0000) 00:49:21.227 Controller ID: 65535 (0xffff) 00:49:21.227 Admin Max SQ Size: 128 00:49:21.227 Transport Service Identifier: 4420 00:49:21.227 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:49:21.227 Transport Address: 10.0.0.3 [2024-12-09 10:01:28.547678] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:49:21.227 [2024-12-09 10:01:28.547698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84740) on tqpair=0xa20750 00:49:21.227 [2024-12-09 10:01:28.547710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.227 [2024-12-09 10:01:28.547720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa848c0) on tqpair=0xa20750 00:49:21.227 [2024-12-09 10:01:28.547729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.227 [2024-12-09 10:01:28.547738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84a40) on tqpair=0xa20750 00:49:21.227 [2024-12-09 10:01:28.547743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.227 [2024-12-09 10:01:28.547750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.227 [2024-12-09 10:01:28.547759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.227 [2024-12-09 10:01:28.547775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.547783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.547790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.227 [2024-12-09 10:01:28.547803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.227 [2024-12-09 10:01:28.547835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.227 [2024-12-09 10:01:28.547914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.227 [2024-12-09 10:01:28.547928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.227 [2024-12-09 10:01:28.547936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.547943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.227 [2024-12-09 10:01:28.551973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.551997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.227 [2024-12-09 10:01:28.552014] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.227 [2024-12-09 10:01:28.552063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.227 [2024-12-09 10:01:28.552174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.227 [2024-12-09 10:01:28.552201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.227 [2024-12-09 10:01:28.552206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.227 [2024-12-09 10:01:28.552217] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:49:21.227 [2024-12-09 10:01:28.552225] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:49:21.227 [2024-12-09 10:01:28.552244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.227 [2024-12-09 10:01:28.552270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.227 [2024-12-09 10:01:28.552295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.227 [2024-12-09 10:01:28.552381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.227 [2024-12-09 10:01:28.552392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.227 [2024-12-09 10:01:28.552399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.227 [2024-12-09 10:01:28.552425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.227 [2024-12-09 10:01:28.552448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.227 [2024-12-09 10:01:28.552472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.227 [2024-12-09 10:01:28.552546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.227 [2024-12-09 10:01:28.552567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.227 [2024-12-09 10:01:28.552576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.227 [2024-12-09 10:01:28.552595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552605] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.227 [2024-12-09 10:01:28.552613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.227 [2024-12-09 10:01:28.552636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.227 [2024-12-09 10:01:28.552711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.227 [2024-12-09 10:01:28.552725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.227 [2024-12-09 10:01:28.552731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.227 [2024-12-09 10:01:28.552736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.227 [2024-12-09 10:01:28.552749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.552754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.552758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.228 [2024-12-09 10:01:28.552766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.228 [2024-12-09 10:01:28.552789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.228 [2024-12-09 10:01:28.552864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.228 [2024-12-09 10:01:28.552875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.228 [2024-12-09 10:01:28.552880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.552884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.228 [2024-12-09 10:01:28.552896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.552901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.552905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.228 [2024-12-09 10:01:28.552913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.228 [2024-12-09 10:01:28.552935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.228 [2024-12-09 10:01:28.553027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.228 [2024-12-09 10:01:28.553048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.228 [2024-12-09 10:01:28.553053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.228 [2024-12-09 10:01:28.553071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.228 [2024-12-09 10:01:28.553089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.228 [2024-12-09 10:01:28.553118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.228 [2024-12-09 10:01:28.553189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.228 [2024-12-09 10:01:28.553199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.228 [2024-12-09 10:01:28.553203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.228 [2024-12-09 10:01:28.553220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.228 [2024-12-09 10:01:28.553237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.228 [2024-12-09 10:01:28.553259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.228 [2024-12-09 10:01:28.553340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.228 [2024-12-09 10:01:28.553356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.228 [2024-12-09 10:01:28.553361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.228 [2024-12-09 10:01:28.553383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.228 [2024-12-09 10:01:28.553412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.228 [2024-12-09 10:01:28.553437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.228 [2024-12-09 10:01:28.553515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.228 [2024-12-09 10:01:28.553529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.228 [2024-12-09 10:01:28.553533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.228 [2024-12-09 10:01:28.553550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.228 [2024-12-09 10:01:28.553570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.228 [2024-12-09 10:01:28.553601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.228 [2024-12-09 10:01:28.553672] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.228 [2024-12-09 10:01:28.553682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.228 [2024-12-09 10:01:28.553686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.228 [2024-12-09 10:01:28.553711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.228 [2024-12-09 10:01:28.553739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.228 [2024-12-09 10:01:28.553768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.228 [2024-12-09 10:01:28.553842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.228 [2024-12-09 10:01:28.553852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.228 [2024-12-09 10:01:28.553856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.228 [2024-12-09 10:01:28.553877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.553893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.228 [2024-12-09 10:01:28.553905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.228 [2024-12-09 10:01:28.553929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.228 [2024-12-09 10:01:28.554029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.228 [2024-12-09 10:01:28.554055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.228 [2024-12-09 10:01:28.554066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.554073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.228 [2024-12-09 10:01:28.554090] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.554096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.554100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.228 [2024-12-09 10:01:28.554108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.228 [2024-12-09 10:01:28.554133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.228 [2024-12-09 10:01:28.554213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.228 [2024-12-09 10:01:28.554224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.228 [2024-12-09 10:01:28.554228] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.554232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.228 [2024-12-09 10:01:28.554245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.554253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.228 [2024-12-09 10:01:28.554259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.228 [2024-12-09 10:01:28.554272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.228 [2024-12-09 10:01:28.554301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.228 [2024-12-09 10:01:28.554374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.228 [2024-12-09 10:01:28.554391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.554395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.554412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.554431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.554463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.554537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.554553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.554558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.554575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.554603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.554632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.554698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.554714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.554718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 
[2024-12-09 10:01:28.554741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.554771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.554800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.554871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.554898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.554908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.554931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.554941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.554949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.554986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.555069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.555081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.555086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.555103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.555121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.555142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.555222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.555238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.555242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.555259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 
10:01:28.555268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.555277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.555301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.555375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.555394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.555399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.555417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.555434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.555457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.555532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.555547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.555551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.555568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.555585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.555613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.555687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.555703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.555708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.555724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.555742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.555763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.555840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.555855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.555860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.555877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.555891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.555903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.555936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.559978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.560003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.560009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.560014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.560030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.560036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.560040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa20750) 00:49:21.229 [2024-12-09 10:01:28.560061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.229 [2024-12-09 10:01:28.560100] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa84bc0, cid 3, qid 0 00:49:21.229 [2024-12-09 10:01:28.560182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.229 [2024-12-09 10:01:28.560194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.229 [2024-12-09 10:01:28.560198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.229 [2024-12-09 10:01:28.560202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa84bc0) on tqpair=0xa20750 00:49:21.229 [2024-12-09 10:01:28.560212] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:49:21.230 00:49:21.230 10:01:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:49:21.230 [2024-12-09 10:01:28.698796] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:49:21.230 [2024-12-09 10:01:28.698847] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74179 ] 00:49:21.502 [2024-12-09 10:01:28.866645] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:49:21.502 [2024-12-09 10:01:28.866705] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:49:21.502 [2024-12-09 10:01:28.866712] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:49:21.502 [2024-12-09 10:01:28.866727] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:49:21.502 [2024-12-09 10:01:28.866741] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:49:21.502 [2024-12-09 10:01:28.867049] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:49:21.502 [2024-12-09 10:01:28.867106] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11a3750 0 00:49:21.502 [2024-12-09 10:01:28.873980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:49:21.502 [2024-12-09 10:01:28.874005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:49:21.502 [2024-12-09 10:01:28.874012] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:49:21.502 [2024-12-09 10:01:28.874016] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:49:21.502 [2024-12-09 10:01:28.874049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.502 [2024-12-09 10:01:28.874058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.502 [2024-12-09 10:01:28.874062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a3750) 00:49:21.502 [2024-12-09 10:01:28.874077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:49:21.502 [2024-12-09 10:01:28.874110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207740, cid 0, qid 0 00:49:21.502 [2024-12-09 10:01:28.881983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.502 [2024-12-09 10:01:28.882006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.502 [2024-12-09 10:01:28.882011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.502 [2024-12-09 10:01:28.882017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207740) on tqpair=0x11a3750 00:49:21.502 [2024-12-09 10:01:28.882028] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:49:21.502 [2024-12-09 10:01:28.882037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:49:21.502 [2024-12-09 10:01:28.882044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:49:21.502 [2024-12-09 10:01:28.882063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.502 [2024-12-09 10:01:28.882069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.502 [2024-12-09 10:01:28.882074] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a3750) 00:49:21.502 [2024-12-09 10:01:28.882084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.502 [2024-12-09 10:01:28.882115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207740, cid 0, qid 0 00:49:21.502 [2024-12-09 10:01:28.882171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.502 [2024-12-09 10:01:28.882179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.502 [2024-12-09 10:01:28.882183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.502 [2024-12-09 10:01:28.882187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207740) on tqpair=0x11a3750 00:49:21.502 [2024-12-09 10:01:28.882194] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:49:21.502 [2024-12-09 10:01:28.882202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:49:21.502 [2024-12-09 10:01:28.882211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.502 [2024-12-09 10:01:28.882216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.502 [2024-12-09 10:01:28.882220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a3750) 00:49:21.502 [2024-12-09 10:01:28.882228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.502 [2024-12-09 10:01:28.882248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207740, cid 0, qid 0 00:49:21.502 [2024-12-09 10:01:28.882298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.502 [2024-12-09 10:01:28.882305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.502 [2024-12-09 10:01:28.882309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.503 [2024-12-09 10:01:28.882313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207740) on tqpair=0x11a3750 00:49:21.503 [2024-12-09 10:01:28.882320] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:49:21.503 [2024-12-09 10:01:28.882330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:49:21.503 [2024-12-09 10:01:28.882337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.503 [2024-12-09 10:01:28.882342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.503 [2024-12-09 10:01:28.882346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a3750) 00:49:21.503 [2024-12-09 10:01:28.882354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.503 [2024-12-09 10:01:28.882373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207740, cid 0, qid 0 00:49:21.503 [2024-12-09 10:01:28.882415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.503 [2024-12-09 10:01:28.882422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.503 
[2024-12-09 10:01:28.882426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.503 [2024-12-09 10:01:28.882430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207740) on tqpair=0x11a3750 00:49:21.503 [2024-12-09 10:01:28.882436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:49:21.503 [2024-12-09 10:01:28.882447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.503 [2024-12-09 10:01:28.882452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.503 [2024-12-09 10:01:28.882456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a3750) 00:49:21.503 [2024-12-09 10:01:28.882464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.503 [2024-12-09 10:01:28.882482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207740, cid 0, qid 0 00:49:21.503 [2024-12-09 10:01:28.882533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.503 [2024-12-09 10:01:28.882540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.503 [2024-12-09 10:01:28.882544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.503 [2024-12-09 10:01:28.882548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207740) on tqpair=0x11a3750 00:49:21.503 [2024-12-09 10:01:28.882554] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:49:21.503 [2024-12-09 10:01:28.882560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:49:21.503 [2024-12-09 10:01:28.882569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:49:21.503 ===================================================== 00:49:21.503 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:49:21.503 ===================================================== 00:49:21.503 Controller Capabilities/Features 00:49:21.503 ================================ 00:49:21.503 Vendor ID: 8086 00:49:21.503 Subsystem Vendor ID: 8086 00:49:21.503 Serial Number: SPDK00000000000001 00:49:21.503 Model Number: SPDK bdev Controller 00:49:21.503 Firmware Version: 25.01 00:49:21.503 Recommended Arb Burst: 6 00:49:21.503 IEEE OUI Identifier: e4 d2 5c 00:49:21.503 Multi-path I/O 00:49:21.503 May have multiple subsystem ports: Yes 00:49:21.503 May have multiple controllers: Yes 00:49:21.503 Associated with SR-IOV VF: No 00:49:21.503 Max Data Transfer Size: 131072 00:49:21.503 Max Number of Namespaces: 32 00:49:21.503 Max Number of I/O Queues: 127 00:49:21.503 NVMe Specification Version (VS): 1.3 00:49:21.503 NVMe Specification Version (Identify): 1.3 00:49:21.503 Maximum Queue Entries: 128 00:49:21.503 Contiguous Queues Required: Yes 00:49:21.503 Arbitration Mechanisms Supported 00:49:21.503 Weighted Round Robin: Not Supported 00:49:21.503 Vendor Specific: Not Supported 00:49:21.503 Reset Timeout: 15000 ms 00:49:21.503 Doorbell Stride: 4 bytes 00:49:21.503 NVM Subsystem Reset: Not Supported 00:49:21.503 Command Sets Supported 00:49:21.503 NVM Command Set: Supported 00:49:21.503 Boot Partition: Not 
Supported 00:49:21.503 Memory Page Size Minimum: 4096 bytes 00:49:21.503 Memory Page Size Maximum: 4096 bytes 00:49:21.503 Persistent Memory Region: Not Supported 00:49:21.503 Optional Asynchronous Events Supported 00:49:21.503 Namespace Attribute Notices: Supported 00:49:21.503 Firmware Activation Notices: Not Supported 00:49:21.503 ANA Change Notices: Not Supported 00:49:21.503 PLE Aggregate Log Change Notices: Not Supported 00:49:21.503 LBA Status Info Alert Notices: Not Supported 00:49:21.503 EGE Aggregate Log Change Notices: Not Supported 00:49:21.503 Normal NVM Subsystem Shutdown event: Not Supported 00:49:21.503 Zone Descriptor Change Notices: Not Supported 00:49:21.503 Discovery Log Change Notices: Not Supported 00:49:21.503 Controller Attributes 00:49:21.503 128-bit Host Identifier: Supported 00:49:21.503 Non-Operational Permissive Mode: Not Supported 00:49:21.503 NVM Sets: Not Supported 00:49:21.503 Read Recovery Levels: Not Supported 00:49:21.503 Endurance Groups: Not Supported 00:49:21.503 Predictable Latency Mode: Not Supported 00:49:21.503 Traffic Based Keep ALive: Not Supported 00:49:21.503 Namespace Granularity: Not Supported 00:49:21.503 SQ Associations: Not Supported 00:49:21.503 UUID List: Not Supported 00:49:21.503 Multi-Domain Subsystem: Not Supported 00:49:21.503 Fixed Capacity Management: Not Supported 00:49:21.503 Variable Capacity Management: Not Supported 00:49:21.503 Delete Endurance Group: Not Supported 00:49:21.503 Delete NVM Set: Not Supported 00:49:21.503 Extended LBA Formats Supported: Not Supported 00:49:21.503 Flexible Data Placement Supported: Not Supported 00:49:21.503 00:49:21.503 Controller Memory Buffer Support 00:49:21.503 ================================ 00:49:21.503 Supported: No 00:49:21.503 00:49:21.503 Persistent Memory Region Support 00:49:21.503 ================================ 00:49:21.503 Supported: No 00:49:21.503 00:49:21.503 Admin Command Set Attributes 00:49:21.503 ============================ 00:49:21.503 Security Send/Receive: Not Supported 00:49:21.503 Format NVM: Not Supported 00:49:21.503 Firmware Activate/Download: Not Supported 00:49:21.503 Namespace Management: Not Supported 00:49:21.503 Device Self-Test: Not Supported 00:49:21.503 Directives: Not Supported 00:49:21.503 NVMe-MI: Not Supported 00:49:21.503 Virtualization Management: Not Supported 00:49:21.503 Doorbell Buffer Config: Not Supported 00:49:21.503 Get LBA Status Capability: Not Supported 00:49:21.503 Command & Feature Lockdown Capability: Not Supported 00:49:21.503 Abort Command Limit: 4 00:49:21.503 Async Event Request Limit: 4 00:49:21.503 Number of Firmware Slots: N/A 00:49:21.503 Firmware Slot 1 Read-Only: N/A 00:49:21.503 Firmware Activation Without Reset: N/A 00:49:21.503 Multiple Update Detection Support: N/A 00:49:21.503 Firmware Update Granularity: No Information Provided 00:49:21.503 Per-Namespace SMART Log: No 00:49:21.503 Asymmetric Namespace Access Log Page: Not Supported 00:49:21.503 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:49:21.503 Command Effects Log Page: Supported 00:49:21.503 Get Log Page Extended Data: Supported 00:49:21.503 Telemetry Log Pages: Not Supported 00:49:21.503 Persistent Event Log Pages: Not Supported 00:49:21.503 Supported Log Pages Log Page: May Support 00:49:21.503 Commands Supported & Effects Log Page: Not Supported 00:49:21.503 Feature Identifiers & Effects Log Page:May Support 00:49:21.503 NVMe-MI Commands & Effects Log Page: May Support 00:49:21.503 Data Area 4 for Telemetry Log: Not Supported 00:49:21.503 Error Log Page 
Entries Supported: 128 00:49:21.503 Keep Alive: Supported 00:49:21.503 Keep Alive Granularity: 10000 ms 00:49:21.503 00:49:21.503 NVM Command Set Attributes 00:49:21.503 ========================== 00:49:21.503 Submission Queue Entry Size 00:49:21.503 Max: 64 00:49:21.503 Min: 64 00:49:21.503 Completion Queue Entry Size 00:49:21.503 Max: 16 00:49:21.503 Min: 16 00:49:21.503 Number of Namespaces: 32 00:49:21.503 Compare Command: Supported 00:49:21.503 Write Uncorrectable Command: Not Supported 00:49:21.503 Dataset Management Command: Supported 00:49:21.503 Write Zeroes Command: Supported 00:49:21.503 Set Features Save Field: Not Supported 00:49:21.503 Reservations: Supported 00:49:21.503 Timestamp: Not Supported 00:49:21.503 Copy: Supported 00:49:21.503 Volatile Write Cache: Present 00:49:21.503 Atomic Write Unit (Normal): 1 00:49:21.503 Atomic Write Unit (PFail): 1 00:49:21.503 Atomic Compare & Write Unit: 1 00:49:21.503 Fused Compare & Write: Supported 00:49:21.503 Scatter-Gather List 00:49:21.503 SGL Command Set: Supported 00:49:21.503 SGL Keyed: Supported 00:49:21.503 SGL Bit Bucket Descriptor: Not Supported 00:49:21.503 SGL Metadata Pointer: Not Supported 00:49:21.503 Oversized SGL: Not Supported 00:49:21.503 SGL Metadata Address: Not Supported 00:49:21.503 SGL Offset: Supported 00:49:21.503 Transport SGL Data Block: Not Supported 00:49:21.503 Replay Protected Memory Block: Not Supported 00:49:21.503 00:49:21.503 Firmware Slot Information 00:49:21.503 ========================= 00:49:21.503 Active slot: 1 00:49:21.503 Slot 1 Firmware Revision: 25.01 00:49:21.503 00:49:21.503 00:49:21.503 Commands Supported and Effects 00:49:21.503 ============================== 00:49:21.503 Admin Commands 00:49:21.503 -------------- 00:49:21.504 Get Log Page (02h): Supported 00:49:21.504 Identify (06h): Supported 00:49:21.504 Abort (08h): Supported 00:49:21.504 Set Features (09h): Supported 00:49:21.504 Get Features (0Ah): Supported 00:49:21.504 Asynchronous Event Request (0Ch): Supported 00:49:21.504 Keep Alive (18h): Supported 00:49:21.504 I/O Commands 00:49:21.504 ------------ 00:49:21.504 Flush (00h): Supported LBA-Change 00:49:21.504 Write (01h): Supported LBA-Change 00:49:21.504 Read (02h): Supported 00:49:21.504 Compare (05h): Supported 00:49:21.504 Write Zeroes (08h): Supported LBA-Change 00:49:21.504 Dataset Management (09h): Supported LBA-Change 00:49:21.504 Copy (19h): Supported LBA-Change 00:49:21.504 00:49:21.504 Error Log 00:49:21.504 ========= 00:49:21.504 00:49:21.504 Arbitration 00:49:21.504 =========== 00:49:21.504 Arbitration Burst: 1 00:49:21.504 00:49:21.504 Power Management 00:49:21.504 ================ 00:49:21.504 Number of Power States: 1 00:49:21.504 Current Power State: Power State #0 00:49:21.504 Power State #0: 00:49:21.504 Max Power: 0.00 W 00:49:21.504 Non-Operational State: Operational 00:49:21.504 Entry Latency: Not Reported 00:49:21.504 Exit Latency: Not Reported 00:49:21.504 Relative Read Throughput: 0 00:49:21.504 Relative Read Latency: 0 00:49:21.504 Relative Write Throughput: 0 00:49:21.504 Relative Write Latency: 0 00:49:21.504 Idle Power: Not Reported 00:49:21.504 Active Power: Not Reported 00:49:21.504 Non-Operational Permissive Mode: Not Supported 00:49:21.504 00:49:21.504 Health Information 00:49:21.504 ================== 00:49:21.504 Critical Warnings: 00:49:21.504 Available Spare Space: OK 00:49:21.504 Temperature: OK 00:49:21.504 Device Reliability: OK 00:49:21.504 Read Only: No 00:49:21.504 Volatile Memory Backup: OK 00:49:21.504 Current Temperature: 0 
Kelvin (-273 Celsius) 00:49:21.504 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:49:21.504 Available Spare: 0% 00:49:21.504 Available Spare Threshold: 0% 00:49:21.504 Life Percentage Used:[2024-12-09 10:01:28.882676] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:49:21.504 [2024-12-09 10:01:28.882688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:49:21.504 [2024-12-09 10:01:28.882698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.882703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.882708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a3750) 00:49:21.504 [2024-12-09 10:01:28.882715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.504 [2024-12-09 10:01:28.882735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207740, cid 0, qid 0 00:49:21.504 [2024-12-09 10:01:28.882780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.504 [2024-12-09 10:01:28.882787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.504 [2024-12-09 10:01:28.882791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.882795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207740) on tqpair=0x11a3750 00:49:21.504 [2024-12-09 10:01:28.882801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:49:21.504 [2024-12-09 10:01:28.882812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.882817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.882822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a3750) 00:49:21.504 [2024-12-09 10:01:28.882829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.504 [2024-12-09 10:01:28.882847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207740, cid 0, qid 0 00:49:21.504 [2024-12-09 10:01:28.882891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.504 [2024-12-09 10:01:28.882898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.504 [2024-12-09 10:01:28.882902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.882906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207740) on tqpair=0x11a3750 00:49:21.504 [2024-12-09 10:01:28.882912] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:49:21.504 [2024-12-09 10:01:28.882918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:49:21.504 [2024-12-09 10:01:28.882926] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:49:21.504 [2024-12-09 10:01:28.882938] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:49:21.504 [2024-12-09 10:01:28.882950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.882954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a3750) 00:49:21.504 [2024-12-09 10:01:28.882983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.504 [2024-12-09 10:01:28.883007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207740, cid 0, qid 0 00:49:21.504 [2024-12-09 10:01:28.883110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.504 [2024-12-09 10:01:28.883119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.504 [2024-12-09 10:01:28.883123] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883127] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a3750): datao=0, datal=4096, cccid=0 00:49:21.504 [2024-12-09 10:01:28.883134] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1207740) on tqpair(0x11a3750): expected_datao=0, payload_size=4096 00:49:21.504 [2024-12-09 10:01:28.883139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883148] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883153] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.504 [2024-12-09 10:01:28.883168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.504 [2024-12-09 10:01:28.883172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207740) on tqpair=0x11a3750 00:49:21.504 [2024-12-09 10:01:28.883186] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:49:21.504 [2024-12-09 10:01:28.883192] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:49:21.504 [2024-12-09 10:01:28.883197] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:49:21.504 [2024-12-09 10:01:28.883202] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:49:21.504 [2024-12-09 10:01:28.883212] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:49:21.504 [2024-12-09 10:01:28.883218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:49:21.504 [2024-12-09 10:01:28.883229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:49:21.504 [2024-12-09 10:01:28.883238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883247] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a3750) 00:49:21.504 [2024-12-09 10:01:28.883255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:49:21.504 [2024-12-09 10:01:28.883277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207740, cid 0, qid 0 00:49:21.504 [2024-12-09 10:01:28.883325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.504 [2024-12-09 10:01:28.883333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.504 [2024-12-09 10:01:28.883337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207740) on tqpair=0x11a3750 00:49:21.504 [2024-12-09 10:01:28.883354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a3750) 00:49:21.504 [2024-12-09 10:01:28.883371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.504 [2024-12-09 10:01:28.883378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11a3750) 00:49:21.504 [2024-12-09 10:01:28.883393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.504 [2024-12-09 10:01:28.883399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11a3750) 00:49:21.504 [2024-12-09 10:01:28.883415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.504 [2024-12-09 10:01:28.883422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.504 [2024-12-09 10:01:28.883437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.504 [2024-12-09 10:01:28.883442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:49:21.504 [2024-12-09 10:01:28.883452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:49:21.504 [2024-12-09 10:01:28.883460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.504 [2024-12-09 10:01:28.883465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x11a3750) 00:49:21.505 [2024-12-09 10:01:28.883472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.505 [2024-12-09 10:01:28.883493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207740, cid 0, qid 0 00:49:21.505 [2024-12-09 10:01:28.883501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12078c0, cid 1, qid 0 00:49:21.505 [2024-12-09 10:01:28.883506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207a40, cid 2, qid 0 00:49:21.505 [2024-12-09 10:01:28.883511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.505 [2024-12-09 10:01:28.883516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207d40, cid 4, qid 0 00:49:21.505 [2024-12-09 10:01:28.883600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.505 [2024-12-09 10:01:28.883607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.505 [2024-12-09 10:01:28.883611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.883616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207d40) on tqpair=0x11a3750 00:49:21.505 [2024-12-09 10:01:28.883626] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:49:21.505 [2024-12-09 10:01:28.883633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.883643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.883650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.883658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.883662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.883667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a3750) 00:49:21.505 [2024-12-09 10:01:28.883674] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:49:21.505 [2024-12-09 10:01:28.883694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207d40, cid 4, qid 0 00:49:21.505 [2024-12-09 10:01:28.883748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.505 [2024-12-09 10:01:28.883755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.505 [2024-12-09 10:01:28.883759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.883764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207d40) on tqpair=0x11a3750 00:49:21.505 [2024-12-09 10:01:28.883833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.883846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait 
for identify active ns (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.883856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.883860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a3750) 00:49:21.505 [2024-12-09 10:01:28.883868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.505 [2024-12-09 10:01:28.883888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207d40, cid 4, qid 0 00:49:21.505 [2024-12-09 10:01:28.883948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.505 [2024-12-09 10:01:28.883969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.505 [2024-12-09 10:01:28.883975] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.883980] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a3750): datao=0, datal=4096, cccid=4 00:49:21.505 [2024-12-09 10:01:28.883985] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1207d40) on tqpair(0x11a3750): expected_datao=0, payload_size=4096 00:49:21.505 [2024-12-09 10:01:28.883990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.883998] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884003] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.505 [2024-12-09 10:01:28.884018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.505 [2024-12-09 10:01:28.884022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207d40) on tqpair=0x11a3750 00:49:21.505 [2024-12-09 10:01:28.884057] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:49:21.505 [2024-12-09 10:01:28.884078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a3750) 00:49:21.505 [2024-12-09 10:01:28.884121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.505 [2024-12-09 10:01:28.884146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207d40, cid 4, qid 0 00:49:21.505 [2024-12-09 10:01:28.884244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.505 [2024-12-09 10:01:28.884252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.505 [2024-12-09 10:01:28.884256] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884260] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x11a3750): datao=0, datal=4096, cccid=4 00:49:21.505 [2024-12-09 10:01:28.884265] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1207d40) on tqpair(0x11a3750): expected_datao=0, payload_size=4096 00:49:21.505 [2024-12-09 10:01:28.884270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884278] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884283] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.505 [2024-12-09 10:01:28.884299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.505 [2024-12-09 10:01:28.884303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207d40) on tqpair=0x11a3750 00:49:21.505 [2024-12-09 10:01:28.884324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a3750) 00:49:21.505 [2024-12-09 10:01:28.884358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.505 [2024-12-09 10:01:28.884380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207d40, cid 4, qid 0 00:49:21.505 [2024-12-09 10:01:28.884442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.505 [2024-12-09 10:01:28.884449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.505 [2024-12-09 10:01:28.884453] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884457] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a3750): datao=0, datal=4096, cccid=4 00:49:21.505 [2024-12-09 10:01:28.884462] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1207d40) on tqpair(0x11a3750): expected_datao=0, payload_size=4096 00:49:21.505 [2024-12-09 10:01:28.884467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884475] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884479] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.505 [2024-12-09 10:01:28.884494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.505 [2024-12-09 10:01:28.884498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207d40) on tqpair=0x11a3750 00:49:21.505 [2024-12-09 10:01:28.884512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to identify ns iocs specific (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884535] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884561] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:49:21.505 [2024-12-09 10:01:28.884567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:49:21.505 [2024-12-09 10:01:28.884574] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:49:21.505 [2024-12-09 10:01:28.884591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a3750) 00:49:21.505 [2024-12-09 10:01:28.884604] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.505 [2024-12-09 10:01:28.884612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.505 [2024-12-09 10:01:28.884621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a3750) 00:49:21.505 [2024-12-09 10:01:28.884628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.506 [2024-12-09 10:01:28.884654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207d40, cid 4, qid 0 00:49:21.506 [2024-12-09 10:01:28.884662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207ec0, cid 5, qid 0 00:49:21.506 [2024-12-09 10:01:28.884724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.506 [2024-12-09 10:01:28.884731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.506 [2024-12-09 10:01:28.884735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.884740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207d40) on tqpair=0x11a3750 00:49:21.506 [2024-12-09 10:01:28.884747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.506 [2024-12-09 10:01:28.884754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.506 [2024-12-09 10:01:28.884758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.884762] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207ec0) on tqpair=0x11a3750 00:49:21.506 [2024-12-09 10:01:28.884773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.884778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a3750) 00:49:21.506 [2024-12-09 10:01:28.884786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.506 [2024-12-09 10:01:28.884804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207ec0, cid 5, qid 0 00:49:21.506 [2024-12-09 10:01:28.884853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.506 [2024-12-09 10:01:28.884860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.506 [2024-12-09 10:01:28.884864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.884868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207ec0) on tqpair=0x11a3750 00:49:21.506 [2024-12-09 10:01:28.884879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.884884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a3750) 00:49:21.506 [2024-12-09 10:01:28.884891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.506 [2024-12-09 10:01:28.884909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207ec0, cid 5, qid 0 00:49:21.506 [2024-12-09 10:01:28.884984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.506 [2024-12-09 10:01:28.884993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.506 [2024-12-09 10:01:28.884997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207ec0) on tqpair=0x11a3750 00:49:21.506 [2024-12-09 10:01:28.885014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a3750) 00:49:21.506 [2024-12-09 10:01:28.885027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.506 [2024-12-09 10:01:28.885048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207ec0, cid 5, qid 0 00:49:21.506 [2024-12-09 10:01:28.885096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.506 [2024-12-09 10:01:28.885103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.506 [2024-12-09 10:01:28.885107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207ec0) on tqpair=0x11a3750 00:49:21.506 [2024-12-09 10:01:28.885130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a3750) 00:49:21.506 [2024-12-09 10:01:28.885144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.506 [2024-12-09 10:01:28.885152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a3750) 00:49:21.506 [2024-12-09 10:01:28.885163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.506 [2024-12-09 10:01:28.885171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11a3750) 00:49:21.506 [2024-12-09 10:01:28.885182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.506 [2024-12-09 10:01:28.885193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11a3750) 00:49:21.506 [2024-12-09 10:01:28.885205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.506 [2024-12-09 10:01:28.885226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207ec0, cid 5, qid 0 00:49:21.506 [2024-12-09 10:01:28.885234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207d40, cid 4, qid 0 00:49:21.506 [2024-12-09 10:01:28.885239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1208040, cid 6, qid 0 00:49:21.506 [2024-12-09 10:01:28.885244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12081c0, cid 7, qid 0 00:49:21.506 [2024-12-09 10:01:28.885380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.506 [2024-12-09 10:01:28.885388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.506 [2024-12-09 10:01:28.885392] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885396] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a3750): datao=0, datal=8192, cccid=5 00:49:21.506 [2024-12-09 10:01:28.885401] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1207ec0) on tqpair(0x11a3750): expected_datao=0, payload_size=8192 00:49:21.506 [2024-12-09 10:01:28.885406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885423] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885428] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.506 [2024-12-09 10:01:28.885441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.506 [2024-12-09 10:01:28.885444] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885450] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a3750): datao=0, datal=512, cccid=4 00:49:21.506 [2024-12-09 10:01:28.885455] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1207d40) on tqpair(0x11a3750): expected_datao=0, payload_size=512 00:49:21.506 [2024-12-09 10:01:28.885460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885467] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885471] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.506 [2024-12-09 10:01:28.885483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.506 [2024-12-09 10:01:28.885487] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885491] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a3750): datao=0, datal=512, cccid=6 00:49:21.506 [2024-12-09 10:01:28.885496] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1208040) on tqpair(0x11a3750): expected_datao=0, payload_size=512 00:49:21.506 [2024-12-09 10:01:28.885500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885507] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885511] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:21.506 [2024-12-09 10:01:28.885523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:21.506 [2024-12-09 10:01:28.885527] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885531] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a3750): datao=0, datal=4096, cccid=7 00:49:21.506 [2024-12-09 10:01:28.885536] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12081c0) on tqpair(0x11a3750): expected_datao=0, payload_size=4096 00:49:21.506 [2024-12-09 10:01:28.885541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885548] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885552] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.506 [2024-12-09 10:01:28.885567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.506 [2024-12-09 10:01:28.885571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207ec0) on tqpair=0x11a3750 00:49:21.506 [2024-12-09 10:01:28.885591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.506 [2024-12-09 10:01:28.885599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.506 [2024-12-09 10:01:28.885603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207d40) on tqpair=0x11a3750 00:49:21.506 [2024-12-09 10:01:28.885619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.506 [2024-12-09 10:01:28.885626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.506 [2024-12-09 10:01:28.885630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1208040) on tqpair=0x11a3750 00:49:21.506 [2024-12-09 10:01:28.885642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.506 [2024-12-09 10:01:28.885648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.506 [2024-12-09 10:01:28.885652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12081c0) on tqpair=0x11a3750 00:49:21.506 [2024-12-09 10:01:28.885764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11a3750) 00:49:21.506 [2024-12-09 10:01:28.885781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.506 [2024-12-09 10:01:28.885805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12081c0, cid 7, qid 0 00:49:21.506 [2024-12-09 10:01:28.885859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.506 [2024-12-09 10:01:28.885867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.506 [2024-12-09 10:01:28.885871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.506 [2024-12-09 10:01:28.885875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12081c0) on tqpair=0x11a3750 00:49:21.506 [2024-12-09 10:01:28.885915] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:49:21.507 [2024-12-09 10:01:28.885927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207740) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.885935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.507 [2024-12-09 10:01:28.885941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12078c0) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.885946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.507 [2024-12-09 10:01:28.885952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207a40) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.889977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.507 [2024-12-09 10:01:28.889988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.889994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.507 [2024-12-09 10:01:28.890006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.507 [2024-12-09 10:01:28.890025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
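The entries above trace the tail of the controller initialization state machine over NVMe/TCP (keep-alive timer, number of queues, Identify controller, active namespace list, namespace data and NS ID descriptors, supported log pages and features), followed by "Prepare to destruct SSD"; the repeated FABRIC PROPERTY GET commands below correspond to the host polling the controller's CSTS register over the fabrics property interface until shutdown completes, within the 10000 ms shutdown timeout it logs. For reference, a minimal host-side sketch that exercises the same connect/enumerate/detach path, assuming the public SPDK host API (spdk/env.h, spdk/nvme.h); the target address and port are placeholders, and only the subsystem NQN is taken from the trace:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr_opts ctrlr_opts;
	struct spdk_nvme_ctrlr *ctrlr;
	uint32_t nsid;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Placeholder address/port; only the subsystem NQN matches the trace. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:TCP adrfam:IPv4 traddr:127.0.0.1 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
	/* Default keep-alive timeout; keep-alive is sent at half this interval,
	 * matching "Sending keep alive every 5000000 us" in the trace. */
	ctrlr_opts.keep_alive_timeout_ms = 10000;

	/* Connecting runs the init state machine traced above: Set Features for
	 * AER config, keep-alive and number of queues, then Identify controller,
	 * active NS list, namespace and NS ID descriptors, then log/feature pages. */
	ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect failed\n");
		return 1;
	}

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		printf("active namespace: %u\n", nsid);
	}

	/* Detaching is the "Prepare to destruct SSD" step: outstanding AERs are
	 * aborted (ABORTED - SQ DELETION) and the controller is shut down over
	 * the fabrics property interface, producing the Property Set/Get entries. */
	spdk_nvme_detach(ctrlr);
	spdk_env_fini();
	return 0;
}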
00:49:21.507 [2024-12-09 10:01:28.890055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.507 [2024-12-09 10:01:28.890108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.507 [2024-12-09 10:01:28.890116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.507 [2024-12-09 10:01:28.890120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.890133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.507 [2024-12-09 10:01:28.890150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.507 [2024-12-09 10:01:28.890172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.507 [2024-12-09 10:01:28.890248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.507 [2024-12-09 10:01:28.890255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.507 [2024-12-09 10:01:28.890259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.890269] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:49:21.507 [2024-12-09 10:01:28.890274] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:49:21.507 [2024-12-09 10:01:28.890285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.507 [2024-12-09 10:01:28.890302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.507 [2024-12-09 10:01:28.890320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.507 [2024-12-09 10:01:28.890364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.507 [2024-12-09 10:01:28.890371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.507 [2024-12-09 10:01:28.890375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.890390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.507 [2024-12-09 10:01:28.890407] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.507 [2024-12-09 10:01:28.890424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.507 [2024-12-09 10:01:28.890474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.507 [2024-12-09 10:01:28.890487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.507 [2024-12-09 10:01:28.890491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.890508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.507 [2024-12-09 10:01:28.890524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.507 [2024-12-09 10:01:28.890543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.507 [2024-12-09 10:01:28.890592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.507 [2024-12-09 10:01:28.890603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.507 [2024-12-09 10:01:28.890608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.890624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.507 [2024-12-09 10:01:28.890640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.507 [2024-12-09 10:01:28.890659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.507 [2024-12-09 10:01:28.890706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.507 [2024-12-09 10:01:28.890717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.507 [2024-12-09 10:01:28.890722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.890738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.507 [2024-12-09 10:01:28.890755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.507 [2024-12-09 10:01:28.890773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.507 [2024-12-09 10:01:28.890820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.507 [2024-12-09 10:01:28.890832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.507 [2024-12-09 10:01:28.890837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.507 [2024-12-09 10:01:28.890852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.507 [2024-12-09 10:01:28.890862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.507 [2024-12-09 10:01:28.890869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.508 [2024-12-09 10:01:28.890888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.508 [2024-12-09 10:01:28.890930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.508 [2024-12-09 10:01:28.890937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.508 [2024-12-09 10:01:28.890941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.890946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.508 [2024-12-09 10:01:28.890978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.890987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.890991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.508 [2024-12-09 10:01:28.890999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.508 [2024-12-09 10:01:28.891024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.508 [2024-12-09 10:01:28.891069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.508 [2024-12-09 10:01:28.891080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.508 [2024-12-09 10:01:28.891084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.508 [2024-12-09 10:01:28.891100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.508 [2024-12-09 10:01:28.891117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.508 [2024-12-09 10:01:28.891136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.508 [2024-12-09 10:01:28.891179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.508 [2024-12-09 10:01:28.891190] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.508 [2024-12-09 10:01:28.891194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.508 [2024-12-09 10:01:28.891210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.508 [2024-12-09 10:01:28.891227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.508 [2024-12-09 10:01:28.891246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.508 [2024-12-09 10:01:28.891289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.508 [2024-12-09 10:01:28.891301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.508 [2024-12-09 10:01:28.891306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.508 [2024-12-09 10:01:28.891322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.508 [2024-12-09 10:01:28.891338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.508 [2024-12-09 10:01:28.891357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.508 [2024-12-09 10:01:28.891406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.508 [2024-12-09 10:01:28.891413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.508 [2024-12-09 10:01:28.891417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.508 [2024-12-09 10:01:28.891432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.508 [2024-12-09 10:01:28.891449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.508 [2024-12-09 10:01:28.891465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.508 [2024-12-09 10:01:28.891520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.508 [2024-12-09 10:01:28.891527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.508 [2024-12-09 10:01:28.891531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.508 [2024-12-09 
10:01:28.891535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.508 [2024-12-09 10:01:28.891546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.508 [2024-12-09 10:01:28.891562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.508 [2024-12-09 10:01:28.891579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.508 [2024-12-09 10:01:28.891634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.508 [2024-12-09 10:01:28.891641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.508 [2024-12-09 10:01:28.891645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.508 [2024-12-09 10:01:28.891660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.508 [2024-12-09 10:01:28.891677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.508 [2024-12-09 10:01:28.891694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.508 [2024-12-09 10:01:28.891738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.508 [2024-12-09 10:01:28.891746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.508 [2024-12-09 10:01:28.891750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.508 [2024-12-09 10:01:28.891765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.508 [2024-12-09 10:01:28.891782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.508 [2024-12-09 10:01:28.891800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.508 [2024-12-09 10:01:28.891846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.508 [2024-12-09 10:01:28.891853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.508 [2024-12-09 10:01:28.891857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.508 [2024-12-09 10:01:28.891861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.508 [2024-12-09 10:01:28.891872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.891877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.891881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.509 [2024-12-09 10:01:28.891889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.509 [2024-12-09 10:01:28.891906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.509 [2024-12-09 10:01:28.891952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.509 [2024-12-09 10:01:28.891977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.509 [2024-12-09 10:01:28.891982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.891987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.509 [2024-12-09 10:01:28.891999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.509 [2024-12-09 10:01:28.892016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.509 [2024-12-09 10:01:28.892037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.509 [2024-12-09 10:01:28.892093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.509 [2024-12-09 10:01:28.892103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.509 [2024-12-09 10:01:28.892107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.509 [2024-12-09 10:01:28.892123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.509 [2024-12-09 10:01:28.892139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.509 [2024-12-09 10:01:28.892160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.509 [2024-12-09 10:01:28.892204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.509 [2024-12-09 10:01:28.892211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.509 [2024-12-09 10:01:28.892215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.509 [2024-12-09 10:01:28.892230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892239] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.509 [2024-12-09 10:01:28.892247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.509 [2024-12-09 10:01:28.892264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.509 [2024-12-09 10:01:28.892307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.509 [2024-12-09 10:01:28.892320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.509 [2024-12-09 10:01:28.892325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.509 [2024-12-09 10:01:28.892340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.509 [2024-12-09 10:01:28.892357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.509 [2024-12-09 10:01:28.892376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.509 [2024-12-09 10:01:28.892422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.509 [2024-12-09 10:01:28.892433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.509 [2024-12-09 10:01:28.892438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.509 [2024-12-09 10:01:28.892454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.509 [2024-12-09 10:01:28.892471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.509 [2024-12-09 10:01:28.892489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.509 [2024-12-09 10:01:28.892536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.509 [2024-12-09 10:01:28.892542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.509 [2024-12-09 10:01:28.892546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.509 [2024-12-09 10:01:28.892561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.509 [2024-12-09 10:01:28.892578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.509 [2024-12-09 10:01:28.892595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.509 [2024-12-09 10:01:28.892643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.509 [2024-12-09 10:01:28.892651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.509 [2024-12-09 10:01:28.892655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.509 [2024-12-09 10:01:28.892671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.509 [2024-12-09 10:01:28.892688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.509 [2024-12-09 10:01:28.892706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.509 [2024-12-09 10:01:28.892755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.509 [2024-12-09 10:01:28.892762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.509 [2024-12-09 10:01:28.892766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.509 [2024-12-09 10:01:28.892781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.509 [2024-12-09 10:01:28.892790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.509 [2024-12-09 10:01:28.892798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.509 [2024-12-09 10:01:28.892815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.509 [2024-12-09 10:01:28.892864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.509 [2024-12-09 10:01:28.892871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.510 [2024-12-09 10:01:28.892875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.892879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.510 [2024-12-09 10:01:28.892890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.892895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.892899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.510 [2024-12-09 10:01:28.892907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.510 [2024-12-09 10:01:28.892923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.510 [2024-12-09 
10:01:28.892988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.510 [2024-12-09 10:01:28.892997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.510 [2024-12-09 10:01:28.893001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.510 [2024-12-09 10:01:28.893017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.510 [2024-12-09 10:01:28.893034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.510 [2024-12-09 10:01:28.893054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.510 [2024-12-09 10:01:28.893100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.510 [2024-12-09 10:01:28.893107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.510 [2024-12-09 10:01:28.893111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.510 [2024-12-09 10:01:28.893126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.510 [2024-12-09 10:01:28.893143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.510 [2024-12-09 10:01:28.893160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.510 [2024-12-09 10:01:28.893209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.510 [2024-12-09 10:01:28.893216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.510 [2024-12-09 10:01:28.893220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.510 [2024-12-09 10:01:28.893235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.510 [2024-12-09 10:01:28.893252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.510 [2024-12-09 10:01:28.893269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.510 [2024-12-09 10:01:28.893316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.510 [2024-12-09 10:01:28.893322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.510 
[2024-12-09 10:01:28.893326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.510 [2024-12-09 10:01:28.893342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.510 [2024-12-09 10:01:28.893358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.510 [2024-12-09 10:01:28.893375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.510 [2024-12-09 10:01:28.893427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.510 [2024-12-09 10:01:28.893434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.510 [2024-12-09 10:01:28.893438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.510 [2024-12-09 10:01:28.893453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.510 [2024-12-09 10:01:28.893470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.510 [2024-12-09 10:01:28.893487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.510 [2024-12-09 10:01:28.893540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.510 [2024-12-09 10:01:28.893547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.510 [2024-12-09 10:01:28.893551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.510 [2024-12-09 10:01:28.893566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.510 [2024-12-09 10:01:28.893582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.510 [2024-12-09 10:01:28.893599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.510 [2024-12-09 10:01:28.893645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.510 [2024-12-09 10:01:28.893652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.510 [2024-12-09 10:01:28.893656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.510 [2024-12-09 10:01:28.893671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.510 [2024-12-09 10:01:28.893688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.510 [2024-12-09 10:01:28.893714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.510 [2024-12-09 10:01:28.893757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.510 [2024-12-09 10:01:28.893764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.510 [2024-12-09 10:01:28.893768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.510 [2024-12-09 10:01:28.893783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.510 [2024-12-09 10:01:28.893792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.510 [2024-12-09 10:01:28.893799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.510 [2024-12-09 10:01:28.893816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.511 [2024-12-09 10:01:28.893865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.511 [2024-12-09 10:01:28.893872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.511 [2024-12-09 10:01:28.893876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.511 [2024-12-09 10:01:28.893880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.511 [2024-12-09 10:01:28.893891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.511 [2024-12-09 10:01:28.893896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.511 [2024-12-09 10:01:28.893900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.511 [2024-12-09 10:01:28.893908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.511 [2024-12-09 10:01:28.893925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.511 [2024-12-09 10:01:28.897977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.511 [2024-12-09 10:01:28.897997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.511 [2024-12-09 10:01:28.898003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.511 [2024-12-09 10:01:28.898008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.511 [2024-12-09 10:01:28.898023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:21.511 [2024-12-09 10:01:28.898029] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:21.511 [2024-12-09 10:01:28.898033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a3750) 00:49:21.511 [2024-12-09 10:01:28.898043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:21.511 [2024-12-09 10:01:28.898070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1207bc0, cid 3, qid 0 00:49:21.511 [2024-12-09 10:01:28.898118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:21.511 [2024-12-09 10:01:28.898126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:21.511 [2024-12-09 10:01:28.898130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:21.511 [2024-12-09 10:01:28.898134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1207bc0) on tqpair=0x11a3750 00:49:21.511 [2024-12-09 10:01:28.898143] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:49:21.810 0% 00:49:21.810 Data Units Read: 0 00:49:21.810 Data Units Written: 0 00:49:21.810 Host Read Commands: 0 00:49:21.810 Host Write Commands: 0 00:49:21.810 Controller Busy Time: 0 minutes 00:49:21.810 Power Cycles: 0 00:49:21.810 Power On Hours: 0 hours 00:49:21.810 Unsafe Shutdowns: 0 00:49:21.810 Unrecoverable Media Errors: 0 00:49:21.810 Lifetime Error Log Entries: 0 00:49:21.810 Warning Temperature Time: 0 minutes 00:49:21.810 Critical Temperature Time: 0 minutes 00:49:21.810 00:49:21.810 Number of Queues 00:49:21.810 ================ 00:49:21.810 Number of I/O Submission Queues: 127 00:49:21.810 Number of I/O Completion Queues: 127 00:49:21.810 00:49:21.810 Active Namespaces 00:49:21.810 ================= 00:49:21.810 Namespace ID:1 00:49:21.810 Error Recovery Timeout: Unlimited 00:49:21.810 Command Set Identifier: NVM (00h) 00:49:21.810 Deallocate: Supported 00:49:21.810 Deallocated/Unwritten Error: Not Supported 00:49:21.810 Deallocated Read Value: Unknown 00:49:21.810 Deallocate in Write Zeroes: Not Supported 00:49:21.810 Deallocated Guard Field: 0xFFFF 00:49:21.810 Flush: Supported 00:49:21.810 Reservation: Supported 00:49:21.810 Namespace Sharing Capabilities: Multiple Controllers 00:49:21.810 Size (in LBAs): 131072 (0GiB) 00:49:21.810 Capacity (in LBAs): 131072 (0GiB) 00:49:21.810 Utilization (in LBAs): 131072 (0GiB) 00:49:21.810 NGUID: ABCDEF0123456789ABCDEF0123456789 00:49:21.810 EUI64: ABCDEF0123456789 00:49:21.810 UUID: 465fe57c-37c5-4108-a727-0f276f3bbb9b 00:49:21.810 Thin Provisioning: Not Supported 00:49:21.810 Per-NS Atomic Units: Yes 00:49:21.810 Atomic Boundary Size (Normal): 0 00:49:21.810 Atomic Boundary Size (PFail): 0 00:49:21.810 Atomic Boundary Offset: 0 00:49:21.810 Maximum Single Source Range Length: 65535 00:49:21.810 Maximum Copy Length: 65535 00:49:21.810 Maximum Source Range Count: 1 00:49:21.810 NGUID/EUI64 Never Reused: No 00:49:21.810 Namespace Write Protected: No 00:49:21.810 Number of LBA Formats: 1 00:49:21.810 Current LBA Format: LBA Format #00 00:49:21.810 LBA Format #00: Data Size: 512 Metadata Size: 0 00:49:21.810 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:21.810 rmmod nvme_tcp 00:49:21.810 rmmod nvme_fabrics 00:49:21.810 rmmod nvme_keyring 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74148 ']' 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74148 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74148 ']' 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74148 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74148 00:49:21.810 killing process with pid 74148 00:49:21.810 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:21.811 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:21.811 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74148' 00:49:21.811 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74148 00:49:21.811 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74148 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:22.069 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:49:22.328 00:49:22.328 real 0m2.357s 00:49:22.328 user 0m5.103s 00:49:22.328 sys 0m0.656s 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:22.328 ************************************ 00:49:22.328 END TEST nvmf_identify 00:49:22.328 ************************************ 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:22.328 ************************************ 00:49:22.328 START TEST nvmf_perf 00:49:22.328 ************************************ 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:49:22.328 * Looking for test storage... 
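Before the perf trace continues below, a condensed view of how this host test is driven may help: run_test (from autotest_common.sh) wraps perf.sh, and the script's own helpers set up and tear down the virtual fabric around the benchmark. The outline below is a sketch assembled only from helper names already visible in this log (run_test, nvmftestinit, nvmfappstart, nvmftestfini, spdk_nvme_perf); it is not the literal contents of perf.sh.

  run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
  # inside perf.sh (simplified outline):
  #   nvmftestinit            # build the veth/netns topology and load nvme-tcp
  #   nvmfappstart -m 0xF     # start nvmf_tgt inside the target namespace
  #   ... create the TCP transport, subsystem, namespaces and listeners via rpc.py ...
  #   ... run spdk_nvme_perf against traddr 10.0.0.3 trsvcid 4420 at several queue depths ...
  #   nvmftestfini            # kill nvmf_tgt, restore iptables, delete the topology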
00:49:22.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:49:22.328 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:22.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:22.329 --rc genhtml_branch_coverage=1 00:49:22.329 --rc genhtml_function_coverage=1 00:49:22.329 --rc genhtml_legend=1 00:49:22.329 --rc geninfo_all_blocks=1 00:49:22.329 --rc geninfo_unexecuted_blocks=1 00:49:22.329 00:49:22.329 ' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:22.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:22.329 --rc genhtml_branch_coverage=1 00:49:22.329 --rc genhtml_function_coverage=1 00:49:22.329 --rc genhtml_legend=1 00:49:22.329 --rc geninfo_all_blocks=1 00:49:22.329 --rc geninfo_unexecuted_blocks=1 00:49:22.329 00:49:22.329 ' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:22.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:22.329 --rc genhtml_branch_coverage=1 00:49:22.329 --rc genhtml_function_coverage=1 00:49:22.329 --rc genhtml_legend=1 00:49:22.329 --rc geninfo_all_blocks=1 00:49:22.329 --rc geninfo_unexecuted_blocks=1 00:49:22.329 00:49:22.329 ' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:22.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:22.329 --rc genhtml_branch_coverage=1 00:49:22.329 --rc genhtml_function_coverage=1 00:49:22.329 --rc genhtml_legend=1 00:49:22.329 --rc geninfo_all_blocks=1 00:49:22.329 --rc geninfo_unexecuted_blocks=1 00:49:22.329 00:49:22.329 ' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:22.329 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:49:22.329 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:22.588 Cannot find device "nvmf_init_br" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:22.588 Cannot find device "nvmf_init_br2" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:49:22.588 Cannot find device "nvmf_tgt_br" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:49:22.588 Cannot find device "nvmf_tgt_br2" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:22.588 Cannot find device "nvmf_init_br" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:22.588 Cannot find device "nvmf_init_br2" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:22.588 Cannot find device "nvmf_tgt_br" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:22.588 Cannot find device "nvmf_tgt_br2" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:22.588 Cannot find device "nvmf_br" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:22.588 Cannot find device "nvmf_init_if" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:22.588 Cannot find device "nvmf_init_if2" 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:22.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:22.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:22.588 10:01:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:22.588 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:22.588 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:22.588 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:22.588 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:22.588 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:22.588 10:01:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:22.588 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:22.588 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:22.588 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:22.847 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:22.847 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:49:22.847 00:49:22.847 --- 10.0.0.3 ping statistics --- 00:49:22.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:22.847 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:22.847 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:49:22.847 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:49:22.847 00:49:22.847 --- 10.0.0.4 ping statistics --- 00:49:22.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:22.847 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:22.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:22.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:49:22.847 00:49:22.847 --- 10.0.0.1 ping statistics --- 00:49:22.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:22.847 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:22.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:22.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:49:22.847 00:49:22.847 --- 10.0.0.2 ping statistics --- 00:49:22.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:22.847 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74397 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:49:22.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74397 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74397 ']' 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
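For readers following the trace above, the nvmf_veth_init step that just ran builds a small veth/bridge topology and moves the target-side interfaces into a dedicated network namespace before the pings verify connectivity. The sketch below restates, in condensed form, only commands already visible in this log (the nvmf_tgt_ns_spdk namespace, the nvmf_* interface names, and the 10.0.0.x addresses come from the trace); it illustrates the topology and is not the full nvmf/common.sh implementation.

  # initiator/target veth pairs; the *_br ends are enslaved to a bridge below
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator side stays in the default namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  # target side lives inside the namespace where nvmf_tgt will run
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bridge ties the four *_br ends together (interface bring-up omitted here),
  # and port 4420 is opened for NVMe/TCP on the initiator interfaces
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT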
00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:22.847 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:49:22.847 [2024-12-09 10:01:30.293952] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:49:22.847 [2024-12-09 10:01:30.294041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:23.106 [2024-12-09 10:01:30.447317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:23.106 [2024-12-09 10:01:30.486860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:23.106 [2024-12-09 10:01:30.487178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:23.106 [2024-12-09 10:01:30.487386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:23.106 [2024-12-09 10:01:30.487602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:23.106 [2024-12-09 10:01:30.487718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:23.106 [2024-12-09 10:01:30.488710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:23.106 [2024-12-09 10:01:30.488779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:23.106 [2024-12-09 10:01:30.488851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:49:23.106 [2024-12-09 10:01:30.488860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:23.106 [2024-12-09 10:01:30.543602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:24.039 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:24.039 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:49:24.039 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:24.039 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:24.039 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:49:24.039 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:24.039 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:49:24.039 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:49:24.604 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:49:24.604 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:49:24.604 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:49:24.604 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:49:25.170 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:49:25.170 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:49:25.170 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:49:25.170 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:49:25.170 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:49:25.428 [2024-12-09 10:01:32.727762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:25.428 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:25.686 10:01:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:49:25.686 10:01:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:25.944 10:01:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:49:25.944 10:01:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:49:26.201 10:01:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:49:26.459 [2024-12-09 10:01:33.805120] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:26.459 10:01:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:49:26.719 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:49:26.719 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:49:26.719 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:49:26.719 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:49:28.097 Initializing NVMe Controllers 00:49:28.097 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:49:28.097 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:49:28.097 Initialization complete. Launching workers. 00:49:28.097 ======================================================== 00:49:28.097 Latency(us) 00:49:28.097 Device Information : IOPS MiB/s Average min max 00:49:28.097 PCIE (0000:00:10.0) NSID 1 from core 0: 23839.98 93.12 1342.44 348.25 6854.69 00:49:28.097 ======================================================== 00:49:28.097 Total : 23839.98 93.12 1342.44 348.25 6854.69 00:49:28.097 00:49:28.097 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:49:29.496 Initializing NVMe Controllers 00:49:29.496 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:49:29.496 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:49:29.496 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:49:29.496 Initialization complete. Launching workers. 
00:49:29.496 ======================================================== 00:49:29.496 Latency(us) 00:49:29.496 Device Information : IOPS MiB/s Average min max 00:49:29.496 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3540.43 13.83 281.07 108.92 5080.02 00:49:29.496 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.88 0.48 8136.35 4990.76 12028.33 00:49:29.496 ======================================================== 00:49:29.496 Total : 3664.30 14.31 546.63 108.92 12028.33 00:49:29.496 00:49:29.497 10:01:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:49:30.873 Initializing NVMe Controllers 00:49:30.874 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:49:30.874 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:49:30.874 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:49:30.874 Initialization complete. Launching workers. 00:49:30.874 ======================================================== 00:49:30.874 Latency(us) 00:49:30.874 Device Information : IOPS MiB/s Average min max 00:49:30.874 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8514.34 33.26 3776.97 699.88 8593.20 00:49:30.874 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3965.69 15.49 8108.23 5741.28 16462.25 00:49:30.874 ======================================================== 00:49:30.874 Total : 12480.03 48.75 5153.29 699.88 16462.25 00:49:30.874 00:49:30.874 10:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:49:30.874 10:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:49:33.406 Initializing NVMe Controllers 00:49:33.406 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:49:33.406 Controller IO queue size 128, less than required. 00:49:33.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:33.406 Controller IO queue size 128, less than required. 00:49:33.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:33.406 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:49:33.406 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:49:33.406 Initialization complete. Launching workers. 
00:49:33.406 ======================================================== 00:49:33.406 Latency(us) 00:49:33.406 Device Information : IOPS MiB/s Average min max 00:49:33.406 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1847.59 461.90 70567.68 42313.27 109162.60 00:49:33.406 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 666.99 166.75 195165.10 62650.56 305448.10 00:49:33.406 ======================================================== 00:49:33.406 Total : 2514.58 628.64 103617.06 42313.27 305448.10 00:49:33.406 00:49:33.664 10:01:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:49:33.923 Initializing NVMe Controllers 00:49:33.923 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:49:33.923 Controller IO queue size 128, less than required. 00:49:33.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:33.923 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:49:33.923 Controller IO queue size 128, less than required. 00:49:33.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:33.923 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:49:33.923 WARNING: Some requested NVMe devices were skipped 00:49:33.923 No valid NVMe controllers or AIO or URING devices found 00:49:33.923 10:01:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:49:36.454 Initializing NVMe Controllers 00:49:36.454 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:49:36.454 Controller IO queue size 128, less than required. 00:49:36.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:36.455 Controller IO queue size 128, less than required. 00:49:36.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:36.455 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:49:36.455 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:49:36.455 Initialization complete. Launching workers. 
00:49:36.455 00:49:36.455 ==================== 00:49:36.455 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:49:36.455 TCP transport: 00:49:36.455 polls: 10833 00:49:36.455 idle_polls: 5645 00:49:36.455 sock_completions: 5188 00:49:36.455 nvme_completions: 6847 00:49:36.455 submitted_requests: 10138 00:49:36.455 queued_requests: 1 00:49:36.455 00:49:36.455 ==================== 00:49:36.455 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:49:36.455 TCP transport: 00:49:36.455 polls: 10919 00:49:36.455 idle_polls: 6132 00:49:36.455 sock_completions: 4787 00:49:36.455 nvme_completions: 6805 00:49:36.455 submitted_requests: 10264 00:49:36.455 queued_requests: 1 00:49:36.455 ======================================================== 00:49:36.455 Latency(us) 00:49:36.455 Device Information : IOPS MiB/s Average min max 00:49:36.455 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1708.76 427.19 76504.68 42320.01 105741.40 00:49:36.455 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1698.28 424.57 76111.13 24855.14 144302.46 00:49:36.455 ======================================================== 00:49:36.455 Total : 3407.04 851.76 76308.51 24855.14 144302.46 00:49:36.455 00:49:36.455 10:01:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:49:36.713 10:01:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:36.971 rmmod nvme_tcp 00:49:36.971 rmmod nvme_fabrics 00:49:36.971 rmmod nvme_keyring 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74397 ']' 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74397 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74397 ']' 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74397 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74397 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74397' 00:49:36.971 killing process with pid 74397 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74397 00:49:36.971 10:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74397 00:49:37.537 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:37.537 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:37.537 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:37.537 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:49:37.537 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:49:37.537 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:37.537 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:49:37.793 00:49:37.793 real 0m15.667s 00:49:37.793 user 0m57.006s 00:49:37.793 sys 0m3.999s 00:49:37.793 10:01:45 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:37.793 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:49:37.793 ************************************ 00:49:37.793 END TEST nvmf_perf 00:49:37.793 ************************************ 00:49:38.051 10:01:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:49:38.051 10:01:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:38.051 10:01:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:38.051 10:01:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:38.051 ************************************ 00:49:38.051 START TEST nvmf_fio_host 00:49:38.051 ************************************ 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:49:38.052 * Looking for test storage... 00:49:38.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:38.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:38.052 --rc genhtml_branch_coverage=1 00:49:38.052 --rc genhtml_function_coverage=1 00:49:38.052 --rc genhtml_legend=1 00:49:38.052 --rc geninfo_all_blocks=1 00:49:38.052 --rc geninfo_unexecuted_blocks=1 00:49:38.052 00:49:38.052 ' 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:38.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:38.052 --rc genhtml_branch_coverage=1 00:49:38.052 --rc genhtml_function_coverage=1 00:49:38.052 --rc genhtml_legend=1 00:49:38.052 --rc geninfo_all_blocks=1 00:49:38.052 --rc geninfo_unexecuted_blocks=1 00:49:38.052 00:49:38.052 ' 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:38.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:38.052 --rc genhtml_branch_coverage=1 00:49:38.052 --rc genhtml_function_coverage=1 00:49:38.052 --rc genhtml_legend=1 00:49:38.052 --rc geninfo_all_blocks=1 00:49:38.052 --rc geninfo_unexecuted_blocks=1 00:49:38.052 00:49:38.052 ' 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:38.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:38.052 --rc genhtml_branch_coverage=1 00:49:38.052 --rc genhtml_function_coverage=1 00:49:38.052 --rc genhtml_legend=1 00:49:38.052 --rc geninfo_all_blocks=1 00:49:38.052 --rc geninfo_unexecuted_blocks=1 00:49:38.052 00:49:38.052 ' 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:38.052 10:01:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:38.052 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:38.053 10:01:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:38.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
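The nvmf_veth_init trace that follows builds the virtual test network used by the nvmf_tcp host tests: veth pairs for the initiator side (10.0.0.1, 10.0.0.2) and the target side (10.0.0.3, 10.0.0.4, moved into the nvmf_tgt_ns_spdk namespace), all joined through the nvmf_br bridge, plus iptables ACCEPT rules for port 4420. Below is a condensed, hand-runnable sketch of one initiator/target pair reconstructed from that trace; it is not part of the captured output and assumes root privileges and iproute2, with interface names taken from the log.

    # Target namespace and one veth pair per side (names follow the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br

    # Move the target end into the namespace and address both sides.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bring the interfaces up, then bridge the peer ends together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Admit NVMe/TCP traffic on port 4420 and verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3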
00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:38.053 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:38.311 Cannot find device "nvmf_init_br" 00:49:38.311 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:49:38.311 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:38.311 Cannot find device "nvmf_init_br2" 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:49:38.312 Cannot find device "nvmf_tgt_br" 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:49:38.312 Cannot find device "nvmf_tgt_br2" 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:38.312 Cannot find device "nvmf_init_br" 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:38.312 Cannot find device "nvmf_init_br2" 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:38.312 Cannot find device "nvmf_tgt_br" 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:38.312 Cannot find device "nvmf_tgt_br2" 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:38.312 Cannot find device "nvmf_br" 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:38.312 Cannot find device "nvmf_init_if" 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:38.312 Cannot find device "nvmf_init_if2" 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:38.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:38.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:38.312 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:38.571 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:49:38.571 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:49:38.571 00:49:38.571 --- 10.0.0.3 ping statistics --- 00:49:38.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:38.571 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:38.571 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:49:38.571 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:49:38.571 00:49:38.571 --- 10.0.0.4 ping statistics --- 00:49:38.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:38.571 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:38.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:38.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:49:38.571 00:49:38.571 --- 10.0.0.1 ping statistics --- 00:49:38.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:38.571 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:38.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:38.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:49:38.571 00:49:38.571 --- 10.0.0.2 ping statistics --- 00:49:38.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:38.571 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74870 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74870 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74870 ']' 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:38.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:38.571 10:01:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:49:38.571 [2024-12-09 10:01:46.018685] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:49:38.571 [2024-12-09 10:01:46.018785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:38.830 [2024-12-09 10:01:46.173648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:38.830 [2024-12-09 10:01:46.208567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:38.830 [2024-12-09 10:01:46.208923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:38.830 [2024-12-09 10:01:46.209156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:38.830 [2024-12-09 10:01:46.209265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:38.830 [2024-12-09 10:01:46.209331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
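Once the reactors report in below, fio.sh provisions the target entirely over JSON-RPC: a TCP transport, a malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and listeners on 10.0.0.3:4420 for both the subsystem and discovery. A condensed sketch of that sequence, reconstructed from the trace rather than copied from it, assuming nvmf_tgt is already running as launched above and that rpc.py is invoked from the SPDK repo root against the default /var/tmp/spdk.sock:

    # TCP transport with the same options the test passes to nvmf_create_transport.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB malloc bdev with 512-byte blocks to back the namespace.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1

    # Subsystem cnode1: allow any host (-a), attach Malloc1, listen on the veth target IP.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The host side then drives I/O through the SPDK fio plugin with --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1', as the example_config.fio and mock_sgl_config.fio runs later in the log show.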
00:49:38.830 [2024-12-09 10:01:46.210374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:38.830 [2024-12-09 10:01:46.210438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:38.830 [2024-12-09 10:01:46.210892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:49:38.830 [2024-12-09 10:01:46.210902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:38.830 [2024-12-09 10:01:46.242506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:39.766 10:01:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:39.766 10:01:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:49:39.766 10:01:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:49:40.025 [2024-12-09 10:01:47.296618] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:40.025 10:01:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:49:40.025 10:01:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:40.025 10:01:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:49:40.025 10:01:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:49:40.284 Malloc1 00:49:40.284 10:01:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:40.543 10:01:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:49:40.801 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:49:41.059 [2024-12-09 10:01:48.462979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:41.059 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:49:41.318 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:49:41.318 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:49:41.318 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:49:41.318 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:49:41.318 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:49:41.318 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:49:41.319 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:49:41.608 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:49:41.608 fio-3.35 00:49:41.608 Starting 1 thread 00:49:44.163 00:49:44.163 test: (groupid=0, jobs=1): err= 0: pid=74954: Mon Dec 9 10:01:51 2024 00:49:44.163 read: IOPS=8476, BW=33.1MiB/s (34.7MB/s)(66.5MiB/2007msec) 00:49:44.163 slat (usec): min=2, max=318, avg= 2.73, stdev= 3.18 00:49:44.163 clat (usec): min=2543, max=14112, avg=7838.32, stdev=595.66 00:49:44.163 lat (usec): min=2594, max=14115, avg=7841.05, stdev=595.34 00:49:44.163 clat percentiles (usec): 00:49:44.163 | 1.00th=[ 6718], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7439], 00:49:44.163 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7898], 00:49:44.163 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:49:44.163 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[11863], 99.95th=[13042], 00:49:44.163 | 99.99th=[14091] 00:49:44.163 bw ( KiB/s): min=32648, max=34800, per=99.95%, avg=33892.00, stdev=937.80, samples=4 00:49:44.163 iops : min= 8162, max= 8700, avg=8473.00, stdev=234.45, samples=4 00:49:44.163 write: IOPS=8476, BW=33.1MiB/s (34.7MB/s)(66.5MiB/2007msec); 0 zone resets 00:49:44.163 slat (usec): min=2, max=253, avg= 2.89, stdev= 2.21 00:49:44.163 clat (usec): min=2396, max=12893, avg=7168.68, stdev=535.16 00:49:44.163 lat (usec): min=2410, max=12895, avg=7171.57, stdev=534.95 00:49:44.163 clat percentiles (usec): 
00:49:44.163 | 1.00th=[ 6194], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6783], 00:49:44.163 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7111], 60.00th=[ 7242], 00:49:44.163 | 70.00th=[ 7373], 80.00th=[ 7504], 90.00th=[ 7701], 95.00th=[ 7898], 00:49:44.163 | 99.00th=[ 9110], 99.50th=[ 9634], 99.90th=[10814], 99.95th=[11469], 00:49:44.163 | 99.99th=[12780] 00:49:44.163 bw ( KiB/s): min=33480, max=34496, per=100.00%, avg=33906.00, stdev=426.25, samples=4 00:49:44.163 iops : min= 8370, max= 8624, avg=8476.50, stdev=106.56, samples=4 00:49:44.163 lat (msec) : 4=0.14%, 10=99.32%, 20=0.54% 00:49:44.163 cpu : usr=68.30%, sys=23.33%, ctx=7, majf=0, minf=7 00:49:44.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:49:44.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:44.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:49:44.163 issued rwts: total=17013,17013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:44.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:49:44.163 00:49:44.163 Run status group 0 (all jobs): 00:49:44.163 READ: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=66.5MiB (69.7MB), run=2007-2007msec 00:49:44.163 WRITE: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=66.5MiB (69.7MB), run=2007-2007msec 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:49:44.163 10:01:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:49:44.163 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:49:44.163 fio-3.35 00:49:44.163 Starting 1 thread 00:49:46.694 00:49:46.694 test: (groupid=0, jobs=1): err= 0: pid=74997: Mon Dec 9 10:01:53 2024 00:49:46.694 read: IOPS=7953, BW=124MiB/s (130MB/s)(250MiB/2011msec) 00:49:46.694 slat (usec): min=3, max=212, avg= 3.98, stdev= 2.49 00:49:46.694 clat (usec): min=1894, max=18880, avg=8875.81, stdev=2771.42 00:49:46.694 lat (usec): min=1898, max=18883, avg=8879.80, stdev=2771.59 00:49:46.694 clat percentiles (usec): 00:49:46.694 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6390], 00:49:46.694 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9241], 00:49:46.694 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12518], 95.00th=[14353], 00:49:46.694 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:49:46.694 | 99.99th=[18744] 00:49:46.694 bw ( KiB/s): min=60160, max=69312, per=50.70%, avg=64512.00, stdev=3752.61, samples=4 00:49:46.694 iops : min= 3760, max= 4332, avg=4032.00, stdev=234.54, samples=4 00:49:46.694 write: IOPS=4571, BW=71.4MiB/s (74.9MB/s)(132MiB/1845msec); 0 zone resets 00:49:46.694 slat (usec): min=37, max=332, avg=40.07, stdev= 7.31 00:49:46.694 clat (usec): min=5977, max=24539, avg=12769.41, stdev=2445.09 00:49:46.694 lat (usec): min=6015, max=24590, avg=12809.48, stdev=2446.32 00:49:46.694 clat percentiles (usec): 00:49:46.694 | 1.00th=[ 7963], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10814], 00:49:46.694 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12518], 60.00th=[13042], 00:49:46.694 | 70.00th=[13698], 80.00th=[14615], 90.00th=[15926], 95.00th=[17433], 00:49:46.694 | 99.00th=[19792], 99.50th=[21365], 99.90th=[23200], 99.95th=[23987], 00:49:46.694 | 99.99th=[24511] 00:49:46.695 bw ( KiB/s): min=63520, max=72512, per=91.98%, avg=67280.00, stdev=3768.08, samples=4 00:49:46.695 iops : min= 3970, max= 4532, avg=4205.00, stdev=235.51, samples=4 00:49:46.695 lat (msec) : 2=0.01%, 4=0.29%, 10=48.02%, 20=51.37%, 50=0.32% 00:49:46.695 cpu : usr=81.84%, sys=13.43%, ctx=5, majf=0, minf=10 00:49:46.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:49:46.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:46.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:49:46.695 issued rwts: total=15994,8435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:46.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:49:46.695 00:49:46.695 Run status group 0 (all jobs): 00:49:46.695 READ: bw=124MiB/s (130MB/s), 
124MiB/s-124MiB/s (130MB/s-130MB/s), io=250MiB (262MB), run=2011-2011msec 00:49:46.695 WRITE: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=132MiB (138MB), run=1845-1845msec 00:49:46.695 10:01:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:46.953 rmmod nvme_tcp 00:49:46.953 rmmod nvme_fabrics 00:49:46.953 rmmod nvme_keyring 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74870 ']' 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74870 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74870 ']' 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74870 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74870 00:49:46.953 killing process with pid 74870 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74870' 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74870 00:49:46.953 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74870 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:47.212 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:49:47.470 00:49:47.470 real 0m9.552s 00:49:47.470 user 0m38.182s 00:49:47.470 sys 0m2.389s 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:49:47.470 ************************************ 00:49:47.470 END TEST nvmf_fio_host 00:49:47.470 ************************************ 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:47.470 10:01:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:47.470 ************************************ 00:49:47.470 START TEST nvmf_failover 00:49:47.471 
************************************ 00:49:47.471 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:49:47.731 * Looking for test storage... 00:49:47.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:49:47.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:47.731 --rc genhtml_branch_coverage=1 00:49:47.731 --rc genhtml_function_coverage=1 00:49:47.731 --rc genhtml_legend=1 00:49:47.731 --rc geninfo_all_blocks=1 00:49:47.731 --rc geninfo_unexecuted_blocks=1 00:49:47.731 00:49:47.731 ' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:49:47.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:47.731 --rc genhtml_branch_coverage=1 00:49:47.731 --rc genhtml_function_coverage=1 00:49:47.731 --rc genhtml_legend=1 00:49:47.731 --rc geninfo_all_blocks=1 00:49:47.731 --rc geninfo_unexecuted_blocks=1 00:49:47.731 00:49:47.731 ' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:49:47.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:47.731 --rc genhtml_branch_coverage=1 00:49:47.731 --rc genhtml_function_coverage=1 00:49:47.731 --rc genhtml_legend=1 00:49:47.731 --rc geninfo_all_blocks=1 00:49:47.731 --rc geninfo_unexecuted_blocks=1 00:49:47.731 00:49:47.731 ' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:49:47.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:47.731 --rc genhtml_branch_coverage=1 00:49:47.731 --rc genhtml_function_coverage=1 00:49:47.731 --rc genhtml_legend=1 00:49:47.731 --rc geninfo_all_blocks=1 00:49:47.731 --rc geninfo_unexecuted_blocks=1 00:49:47.731 00:49:47.731 ' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:47.731 
10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:47.731 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:47.731 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
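For orientation, the preamble traced above reduces to a small skeleton: host/failover.sh pulls its constants and helpers from test/nvmf/common.sh and then hands control to nvmftestinit, which (with NET_TYPE=virt) builds the veth topology shown in the lines that follow. The sketch below is reconstructed from the variables and calls visible in the trace; the layout is illustrative, not the verbatim script.

    # Sketch of the failover.sh preamble, reconstructed from the trace above (not the verbatim script).
    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NET_TYPE=virt, NVME_HOSTNQN, ...

    MALLOC_BDEV_SIZE=64                                       # 64 MiB backing bdev, 512-byte blocks
    MALLOC_BLOCK_SIZE=512
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    nvmftestinit    # drives the veth/namespace setup below and installs the nvmftestfini cleanup trap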
00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:47.732 Cannot find device "nvmf_init_br" 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:47.732 Cannot find device "nvmf_init_br2" 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:49:47.732 Cannot find device "nvmf_tgt_br" 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:49:47.732 Cannot find device "nvmf_tgt_br2" 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:47.732 Cannot find device "nvmf_init_br" 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:47.732 Cannot find device "nvmf_init_br2" 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:49:47.732 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:47.732 Cannot find device "nvmf_tgt_br" 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:47.991 Cannot find device "nvmf_tgt_br2" 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:47.991 Cannot find device "nvmf_br" 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:47.991 Cannot find device "nvmf_init_if" 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:47.991 Cannot find device "nvmf_init_if2" 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:47.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:47.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:47.991 
10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:47.991 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:47.992 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:47.992 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:49:47.992 00:49:47.992 --- 10.0.0.3 ping statistics --- 00:49:47.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:47.992 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:47.992 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:49:47.992 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:49:47.992 00:49:47.992 --- 10.0.0.4 ping statistics --- 00:49:47.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:47.992 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:47.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:47.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:49:47.992 00:49:47.992 --- 10.0.0.1 ping statistics --- 00:49:47.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:47.992 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:47.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:47.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:49:47.992 00:49:47.992 --- 10.0.0.2 ping statistics --- 00:49:47.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:47.992 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:47.992 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75275 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75275 00:49:48.251 10:01:55 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75275 ']' 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:48.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:48.251 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:49:48.251 [2024-12-09 10:01:55.555277] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:49:48.251 [2024-12-09 10:01:55.555748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:48.251 [2024-12-09 10:01:55.704703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:49:48.251 [2024-12-09 10:01:55.742935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:48.251 [2024-12-09 10:01:55.742998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:48.251 [2024-12-09 10:01:55.743012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:48.251 [2024-12-09 10:01:55.743020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:48.252 [2024-12-09 10:01:55.743028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
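Condensed from the nvmf_veth_init / nvmfappstart trace above: the host side keeps two initiator veths (10.0.0.1 and 10.0.0.2), the nvmf_tgt_ns_spdk namespace holds the two target veths (10.0.0.3 and 10.0.0.4), all four pairs hang off the nvmf_br bridge, and the target application runs inside the namespace. The sketch below uses only commands that appear in the trace; the initial cleanup probes (the "Cannot find device" lines), the iptables comment tags, and waitforlisten are omitted for brevity.

    # Sketch of the topology nvmf_veth_init builds (condensed from the trace; run as root).
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, stays in the host netns
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side, moved into the namespace
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br                          # bridge ports: the host-side peers of all four pairs
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3                                             # reachability checks, as in the trace
    ping -c 1 10.0.0.4
    modprobe nvme-tcp

    # nvmfappstart -m 0xE: run the target inside the namespace on cores 1-3.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!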
00:49:48.252 [2024-12-09 10:01:55.743808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:48.252 [2024-12-09 10:01:55.743897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:49:48.252 [2024-12-09 10:01:55.743901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:48.510 [2024-12-09 10:01:55.775600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:49:48.510 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:48.510 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:49:48.510 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:48.510 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:48.510 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:49:48.510 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:48.510 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:49:48.769 [2024-12-09 10:01:56.154529] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:48.769 10:01:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:49:49.027 Malloc0 00:49:49.027 10:01:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:49.285 10:01:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:49.543 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:49:49.800 [2024-12-09 10:01:57.293368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:50.057 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:49:50.057 [2024-12-09 10:01:57.553623] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:49:50.315 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:49:50.575 [2024-12-09 10:01:57.845933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:49:50.575 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75325 00:49:50.575 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:49:50.575 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
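The target construction above, written out as the bare RPC sequence it performs. Commands, arguments, and paths are exactly those in the trace; the inline notes on bdevperf's -z and -f flags reflect common usage of that tool and are an assumption, not something the trace states.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: TCP transport, one 64 MiB malloc-backed namespace, listeners on all three ports.
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

    # Host side: bdevperf idles (-z) until it is configured over /var/tmp/bdevperf.sock,
    # and -f keeps the run going when I/O starts failing during the path flips.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!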
00:49:50.575 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75325 /var/tmp/bdevperf.sock 00:49:50.575 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75325 ']' 00:49:50.575 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:49:50.575 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:50.575 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:49:50.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:49:50.575 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:50.575 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:49:50.834 10:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:50.834 10:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:49:50.834 10:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:49:51.092 NVMe0n1 00:49:51.092 10:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:49:51.351 00:49:51.351 10:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75337 00:49:51.351 10:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:49:51.351 10:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:49:52.742 10:01:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:49:52.742 [2024-12-09 10:02:00.124127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.742 [2024-12-09 10:02:00.124186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.742 [2024-12-09 10:02:00.124199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.742 [2024-12-09 10:02:00.124207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.742 [2024-12-09 10:02:00.124216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.742 [2024-12-09 10:02:00.124225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.742 [2024-12-09 10:02:00.124233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.742 [2024-12-09 10:02:00.124242] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 [2024-12-09 10:02:00.125153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949cf0 is same with the state(6) to be set 00:49:52.743 10:02:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:49:56.026 10:02:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:49:56.026 00:49:56.026 10:02:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:49:56.594 10:02:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:49:59.878 10:02:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:49:59.878 [2024-12-09 10:02:07.126969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:59.878 10:02:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:50:00.812 10:02:08 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:50:01.071 10:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75337 00:50:07.665 { 00:50:07.665 "results": [ 00:50:07.665 { 00:50:07.665 "job": "NVMe0n1", 00:50:07.665 "core_mask": "0x1", 00:50:07.665 "workload": "verify", 00:50:07.665 "status": "finished", 00:50:07.665 "verify_range": { 00:50:07.665 "start": 0, 00:50:07.665 "length": 16384 00:50:07.665 }, 00:50:07.665 "queue_depth": 128, 00:50:07.665 "io_size": 4096, 00:50:07.665 "runtime": 15.0083, 00:50:07.665 "iops": 8642.151342923582, 00:50:07.665 "mibps": 33.75840368329524, 00:50:07.665 "io_failed": 3253, 00:50:07.665 "io_timeout": 0, 00:50:07.665 "avg_latency_us": 14414.842749515052, 00:50:07.665 "min_latency_us": 633.0181818181818, 00:50:07.665 "max_latency_us": 18469.236363636363 00:50:07.665 } 00:50:07.665 ], 00:50:07.665 "core_count": 1 00:50:07.665 } 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75325 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75325 ']' 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75325 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75325 00:50:07.665 killing process with pid 75325 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75325' 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75325 00:50:07.665 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75325 00:50:07.665 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:50:07.665 [2024-12-09 10:01:57.914139] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:50:07.665 [2024-12-09 10:01:57.914238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75325 ] 00:50:07.665 [2024-12-09 10:01:58.061773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:07.665 [2024-12-09 10:01:58.094012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:07.665 [2024-12-09 10:01:58.123159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:50:07.665 Running I/O for 15 seconds... 
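Putting the failover choreography in one place: bdevperf attaches the same controller name (NVMe0) to two portals with -x failover so the second acts as a standby path, and the script then flips listeners while the 15-second verify workload runs; the 3253 failed I/Os in the JSON summary above are the commands aborted while a path was being torn down (the ABORTED - SQ DELETION completions replayed below). A sketch of the driving sequence, using only RPC calls that appear in the trace; the backgrounding of bdevperf.py is implied by the run_test_pid/wait pair rather than shown explicitly.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Two paths to the same subsystem, both registered under the bdev name NVMe0 (-x failover).
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $nqn -x failover
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n $nqn -x failover

    # Start the 15-second verify workload, then pull portals out from under it.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!

    sleep 1
    $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4420    # I/O fails over to 4421
    sleep 3
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $nqn -x failover
    $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4421    # fail over again, now to 4422
    sleep 3
    $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420       # original portal comes back
    sleep 1
    $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4422
    wait $run_test_pid                                                         # the JSON summary above is printed after this completes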
00:50:07.665 6436.00 IOPS, 25.14 MiB/s [2024-12-09T10:02:15.167Z] [2024-12-09 10:02:00.125222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.665 [2024-12-09 10:02:00.125941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.665 [2024-12-09 10:02:00.125969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 
[2024-12-09 10:02:00.125990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:50 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.126967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62128 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.126985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.127001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.127015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.127030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.127044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.127060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.127074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.127089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.127103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.127119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.127132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.127148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.127162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.127177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.127191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.666 [2024-12-09 10:02:00.127207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.666 [2024-12-09 10:02:00.127220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:50:07.667 [2024-12-09 10:02:00.127279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127588] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.127982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.127996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.667 [2024-12-09 10:02:00.128465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.667 [2024-12-09 10:02:00.128484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.128839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:50:07.668 [2024-12-09 10:02:00.128857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.128877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.128908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.128937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.128980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.128997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129173] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:00.129312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.668 [2024-12-09 10:02:00.129342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dfa80 is same with the state(6) to be set 00:50:07.668 [2024-12-09 10:02:00.129376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.668 [2024-12-09 10:02:00.129388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.668 [2024-12-09 10:02:00.129400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:50:07.668 [2024-12-09 10:02:00.129414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129466] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:50:07.668 [2024-12-09 10:02:00.129526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.668 [2024-12-09 10:02:00.129547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.668 [2024-12-09 10:02:00.129577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.668 [2024-12-09 10:02:00.129604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.668 [2024-12-09 10:02:00.129632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:00.129646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:50:07.668 [2024-12-09 10:02:00.133700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:50:07.668 [2024-12-09 10:02:00.133744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2070c60 (9): Bad file descriptor 00:50:07.668 [2024-12-09 10:02:00.157218] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:50:07.668 7245.50 IOPS, 28.30 MiB/s [2024-12-09T10:02:15.170Z] 7806.33 IOPS, 30.49 MiB/s [2024-12-09T10:02:15.170Z] 8080.75 IOPS, 31.57 MiB/s [2024-12-09T10:02:15.170Z] [2024-12-09 10:02:03.799130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:03.799203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:03.799261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:03.799279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.668 [2024-12-09 10:02:03.799296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.668 [2024-12-09 10:02:03.799311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.799340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.799370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 
10:02:03.799399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.799428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.799458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.669 [2024-12-09 10:02:03.799940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.799969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.799987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.669 [2024-12-09 10:02:03.800331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:50:07.669 [2024-12-09 10:02:03.800346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.800360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.800389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.800424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.800455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 
10:02:03.800645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.800929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.800970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.800988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:71 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.670 [2024-12-09 10:02:03.801424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.801453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.801482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.801511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.670 [2024-12-09 10:02:03.801526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.670 [2024-12-09 10:02:03.801540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66216 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.801575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.801605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.801634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.801664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.801693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.801722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.801762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.801791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.801820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.801849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 
[2024-12-09 10:02:03.801882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.801912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.801941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.801975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.801991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.671 [2024-12-09 10:02:03.802648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.671 [2024-12-09 10:02:03.802787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.671 [2024-12-09 10:02:03.802800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.802816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:03.802830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.802845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:03.802860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.802876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:03.802890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.802905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:03.802919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.802935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:03.802949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.802977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:03.802992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.803008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:03.803021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.803037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:03.803050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.803066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:03.803079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.803095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:03.803108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 
[2024-12-09 10:02:03.803130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3950 is same with the state(6) to be set 00:50:07.672 [2024-12-09 10:02:03.803148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.672 [2024-12-09 10:02:03.803159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.672 [2024-12-09 10:02:03.803169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66432 len:8 PRP1 0x0 PRP2 0x0 00:50:07.672 [2024-12-09 10:02:03.803182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.803235] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:50:07.672 [2024-12-09 10:02:03.803294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.672 [2024-12-09 10:02:03.803315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.803331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.672 [2024-12-09 10:02:03.803345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.803359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.672 [2024-12-09 10:02:03.803372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.803386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.672 [2024-12-09 10:02:03.803399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:03.803413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:50:07.672 [2024-12-09 10:02:03.807421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:50:07.672 [2024-12-09 10:02:03.807463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2070c60 (9): Bad file descriptor 00:50:07.672 [2024-12-09 10:02:03.838868] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:50:07.672 8149.00 IOPS, 31.83 MiB/s [2024-12-09T10:02:15.174Z] 8274.83 IOPS, 32.32 MiB/s [2024-12-09T10:02:15.174Z] 8369.14 IOPS, 32.69 MiB/s [2024-12-09T10:02:15.174Z] 8439.12 IOPS, 32.97 MiB/s [2024-12-09T10:02:15.174Z] 8495.22 IOPS, 33.18 MiB/s [2024-12-09T10:02:15.174Z] [2024-12-09 10:02:08.423481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9728 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.672 [2024-12-09 10:02:08.423918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:08.423947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.423979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:08.423995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.424011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:08.424024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.424040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:08.424053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.424080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:08.424105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.424124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:08.424138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.424154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:08.424167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.424183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 
[2024-12-09 10:02:08.424196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.424219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.672 [2024-12-09 10:02:08.424233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.672 [2024-12-09 10:02:08.424248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.424261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.424291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.424320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.424349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.424379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.424408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.424437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.424976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.424993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.425007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.425036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.425065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.425095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.425125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:50:07.673 [2024-12-09 10:02:08.425141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.425154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.425184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.425213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.425242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.673 [2024-12-09 10:02:08.425279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.425309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.425339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.425369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.425397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.425427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.425456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.673 [2024-12-09 10:02:08.425486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.673 [2024-12-09 10:02:08.425502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.425515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.425544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.425574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.425603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.425632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.425668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.425698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.425727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.425756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.425785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.425814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.425844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.425875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.425905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.425934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.425976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.425994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.426008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.426038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.426075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.426104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.426134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.426164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.426193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.426222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.426258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:07.674 [2024-12-09 10:02:08.426288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.426317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.426346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 
10:02:08.426379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.426408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.426445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.426475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.426504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.426533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.674 [2024-12-09 10:02:08.426562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.674 [2024-12-09 10:02:08.426578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.675 [2024-12-09 10:02:08.426591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.426607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.675 [2024-12-09 10:02:08.426620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.426636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.675 [2024-12-09 10:02:08.426649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.426664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.675 [2024-12-09 10:02:08.426678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.426694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.675 [2024-12-09 10:02:08.426708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.426726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:07.675 [2024-12-09 10:02:08.426741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.426756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1690 is same with the state(6) to be set 00:50:07.675 [2024-12-09 10:02:08.426773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.426783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.426794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9648 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.426807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.426829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.426840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.426853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10104 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.426867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.426881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.426891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.426902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.426915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.426929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.426938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.426949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10120 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.426974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.426989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.426999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:50:07.675 [2024-12-09 10:02:08.427009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10128 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10136 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10152 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10160 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10168 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 
10:02:08.427305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10184 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10192 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10200 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10216 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10224 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10232 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.675 [2024-12-09 10:02:08.427741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10248 len:8 PRP1 0x0 PRP2 0x0 00:50:07.675 [2024-12-09 10:02:08.427754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.675 [2024-12-09 10:02:08.427768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.675 [2024-12-09 10:02:08.427777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.676 [2024-12-09 10:02:08.427788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10256 len:8 PRP1 0x0 PRP2 0x0 00:50:07.676 [2024-12-09 10:02:08.427801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.676 [2024-12-09 10:02:08.427815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.676 [2024-12-09 10:02:08.427825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.676 [2024-12-09 10:02:08.427835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10264 len:8 PRP1 0x0 PRP2 0x0 00:50:07.676 [2024-12-09 10:02:08.427848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.676 [2024-12-09 10:02:08.427862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.676 [2024-12-09 10:02:08.427872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.676 [2024-12-09 10:02:08.427882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:10272 len:8 PRP1 0x0 PRP2 0x0 00:50:07.676 [2024-12-09 10:02:08.427895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.676 [2024-12-09 10:02:08.427908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.676 [2024-12-09 10:02:08.427920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.676 [2024-12-09 10:02:08.427931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10280 len:8 PRP1 0x0 PRP2 0x0 00:50:07.676 [2024-12-09 10:02:08.427944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.676 [2024-12-09 10:02:08.427975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:07.676 [2024-12-09 10:02:08.427987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:07.676 [2024-12-09 10:02:08.427998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10288 len:8 PRP1 0x0 PRP2 0x0 00:50:07.676 [2024-12-09 10:02:08.428011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.676 [2024-12-09 10:02:08.428071] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:50:07.676 [2024-12-09 10:02:08.428132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.676 [2024-12-09 10:02:08.428157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.676 [2024-12-09 10:02:08.428174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.676 [2024-12-09 10:02:08.428187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.676 [2024-12-09 10:02:08.428202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.676 [2024-12-09 10:02:08.428215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.676 [2024-12-09 10:02:08.428230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:07.676 [2024-12-09 10:02:08.428243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:07.676 [2024-12-09 10:02:08.428257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
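The wall of 'ABORTED - SQ DELETION' notices above is the expected flush of queued I/O while the failover path deletes the old submission queue on 10.0.0.3:4422 before moving to 10.0.0.3:4420. A quick, illustrative way to triage such a run is to grep the captured bdevperf output; the try.txt path below is the capture file the test script cats later in this log, so treat it as an assumption outside this harness.

#!/usr/bin/env bash
# Illustrative triage of a failover run from the captured bdevperf log.
# Assumption: the output was captured to the try.txt file referenced later in this trace.
LOG=${1:-/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt}

# Every queued command flushed by the submission-queue teardown completes with this status.
echo "I/Os aborted by SQ deletion: $(grep -c 'ABORTED - SQ DELETION' "$LOG")"

# Each path switch is announced by bdev_nvme_failover_trid.
grep 'Start failover from' "$LOG"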
00:50:07.676 [2024-12-09 10:02:08.432250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:50:07.676 [2024-12-09 10:02:08.432293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2070c60 (9): Bad file descriptor 00:50:07.676 [2024-12-09 10:02:08.458668] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:50:07.676 8503.20 IOPS, 33.22 MiB/s [2024-12-09T10:02:15.178Z] 8541.09 IOPS, 33.36 MiB/s [2024-12-09T10:02:15.178Z] 8571.50 IOPS, 33.48 MiB/s [2024-12-09T10:02:15.178Z] 8597.54 IOPS, 33.58 MiB/s [2024-12-09T10:02:15.178Z] 8621.71 IOPS, 33.68 MiB/s [2024-12-09T10:02:15.178Z] 8641.60 IOPS, 33.76 MiB/s 00:50:07.676 Latency(us) 00:50:07.676 [2024-12-09T10:02:15.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:07.676 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:50:07.676 Verification LBA range: start 0x0 length 0x4000 00:50:07.676 NVMe0n1 : 15.01 8642.15 33.76 216.75 0.00 14414.84 633.02 18469.24 00:50:07.676 [2024-12-09T10:02:15.178Z] =================================================================================================================== 00:50:07.676 [2024-12-09T10:02:15.178Z] Total : 8642.15 33.76 216.75 0.00 14414.84 633.02 18469.24 00:50:07.676 Received shutdown signal, test time was about 15.000000 seconds 00:50:07.676 00:50:07.676 Latency(us) 00:50:07.676 [2024-12-09T10:02:15.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:07.676 [2024-12-09T10:02:15.178Z] =================================================================================================================== 00:50:07.676 [2024-12-09T10:02:15.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75515 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75515 /var/tmp/bdevperf.sock 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75515 ']' 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:07.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
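The pass/fail gate traced at host/failover.sh@65-@67 above simply counts the 'Resetting controller successful' notices in the captured output and requires exactly three, one per programmed failover. A stripped-down sketch of that check, assuming the same try.txt capture file as in the trace, looks like this:

#!/usr/bin/env bash
# Stripped-down sketch of the count check traced at host/failover.sh@65-@67.
# Assumption: bdevperf output was captured to try.txt as elsewhere in this trace.
TRY_TXT=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$TRY_TXT" || true)
if (( count != 3 )); then
    echo "expected 3 successful failover resets, got ${count}" >&2
    exit 1
fi
echo "failover count check passed (${count} resets)"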
00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:50:07.676 [2024-12-09 10:02:14.738778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:50:07.676 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:50:07.676 [2024-12-09 10:02:14.987015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:50:07.676 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:50:07.934 NVMe0n1 00:50:07.934 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:50:08.501 00:50:08.501 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:50:08.760 00:50:08.760 10:02:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:50:08.760 10:02:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:50:09.018 10:02:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:50:09.276 10:02:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:50:12.561 10:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:50:12.561 10:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:50:12.561 10:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75592 00:50:12.561 10:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:50:12.561 10:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75592 00:50:13.937 { 00:50:13.937 "results": [ 00:50:13.937 { 00:50:13.937 "job": "NVMe0n1", 00:50:13.937 "core_mask": "0x1", 00:50:13.937 "workload": "verify", 00:50:13.937 "status": "finished", 00:50:13.937 "verify_range": { 00:50:13.937 "start": 0, 00:50:13.937 "length": 16384 00:50:13.937 }, 00:50:13.937 "queue_depth": 128, 
00:50:13.937 "io_size": 4096, 00:50:13.937 "runtime": 1.004508, 00:50:13.937 "iops": 6647.035165474043, 00:50:13.937 "mibps": 25.964981115132982, 00:50:13.937 "io_failed": 0, 00:50:13.937 "io_timeout": 0, 00:50:13.937 "avg_latency_us": 19178.443814995848, 00:50:13.937 "min_latency_us": 2427.8109090909093, 00:50:13.937 "max_latency_us": 15609.483636363637 00:50:13.937 } 00:50:13.937 ], 00:50:13.937 "core_count": 1 00:50:13.937 } 00:50:13.937 10:02:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:50:13.937 [2024-12-09 10:02:14.233810] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:50:13.937 [2024-12-09 10:02:14.233914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75515 ] 00:50:13.937 [2024-12-09 10:02:14.391579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:13.937 [2024-12-09 10:02:14.430157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:13.937 [2024-12-09 10:02:14.463416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:50:13.937 [2024-12-09 10:02:16.649479] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:50:13.937 [2024-12-09 10:02:16.649595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:13.937 [2024-12-09 10:02:16.649621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:13.937 [2024-12-09 10:02:16.649641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:13.937 [2024-12-09 10:02:16.649655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:13.937 [2024-12-09 10:02:16.649669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:13.937 [2024-12-09 10:02:16.649683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:13.937 [2024-12-09 10:02:16.649697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:13.937 [2024-12-09 10:02:16.649711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:13.937 [2024-12-09 10:02:16.649725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:50:13.937 [2024-12-09 10:02:16.649776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:50:13.937 [2024-12-09 10:02:16.649808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131bc60 (9): Bad file descriptor 00:50:13.937 [2024-12-09 10:02:16.652270] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
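The single-failover phase captured in try.txt above is driven entirely through SPDK's rpc.py: two extra listeners are added to the subsystem, the bdevperf instance attaches the same controller over all three ports with -x failover, and the primary path is then detached so the initiator has to fail over. A condensed sketch of that sequence follows; the addresses, ports, NQN, and socket path are the ones shown in the trace, the per-port loop is a simplification of the individual attach calls, and error handling is omitted.

#!/usr/bin/env bash
# Condensed sketch of the setup traced at host/failover.sh@76-@84 above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on two additional TCP ports next to the existing 4420.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4422

# Attach the same controller over all three paths in failover mode,
# through the bdevperf RPC socket.
for port in 4420 4421 4422; do
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s "$port" -f ipv4 -n "$NQN" -x failover
done

# Drop the primary path so queued I/O is aborted and the initiator fails over.
"$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 \
    -s 4420 -f ipv4 -n "$NQN"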
00:50:13.937 Running I/O for 1 seconds... 00:50:13.937 6549.00 IOPS, 25.58 MiB/s 00:50:13.937 Latency(us) 00:50:13.937 [2024-12-09T10:02:21.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:13.937 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:50:13.937 Verification LBA range: start 0x0 length 0x4000 00:50:13.937 NVMe0n1 : 1.00 6647.04 25.96 0.00 0.00 19178.44 2427.81 15609.48 00:50:13.937 [2024-12-09T10:02:21.439Z] =================================================================================================================== 00:50:13.937 [2024-12-09T10:02:21.439Z] Total : 6647.04 25.96 0.00 0.00 19178.44 2427.81 15609.48 00:50:13.937 10:02:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:50:13.937 10:02:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:50:14.195 10:02:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:50:14.453 10:02:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:50:14.453 10:02:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:50:14.711 10:02:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:50:14.969 10:02:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75515 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75515 ']' 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75515 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75515 00:50:18.254 killing process with pid 75515 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75515' 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75515 00:50:18.254 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75515 00:50:18.513 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:50:18.513 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:50:18.772 rmmod nvme_tcp 00:50:18.772 rmmod nvme_fabrics 00:50:18.772 rmmod nvme_keyring 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75275 ']' 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75275 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75275 ']' 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75275 00:50:18.772 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75275 00:50:19.030 killing process with pid 75275 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75275' 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75275 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75275 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:50:19.030 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:50:19.031 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:50:19.031 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:50:19.031 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:50:19.031 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:50:19.031 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:50:19.031 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:50:19.031 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:50:19.290 00:50:19.290 real 0m31.809s 00:50:19.290 user 2m3.450s 00:50:19.290 sys 0m5.178s 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:50:19.290 ************************************ 00:50:19.290 END TEST nvmf_failover 00:50:19.290 ************************************ 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:19.290 10:02:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:50:19.550 ************************************ 00:50:19.550 START TEST nvmf_host_discovery 00:50:19.550 ************************************ 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:50:19.550 * Looking for test storage... 
00:50:19.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:50:19.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:19.550 --rc genhtml_branch_coverage=1 00:50:19.550 --rc genhtml_function_coverage=1 00:50:19.550 --rc genhtml_legend=1 00:50:19.550 --rc geninfo_all_blocks=1 00:50:19.550 --rc geninfo_unexecuted_blocks=1 00:50:19.550 00:50:19.550 ' 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:50:19.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:19.550 --rc genhtml_branch_coverage=1 00:50:19.550 --rc genhtml_function_coverage=1 00:50:19.550 --rc genhtml_legend=1 00:50:19.550 --rc geninfo_all_blocks=1 00:50:19.550 --rc geninfo_unexecuted_blocks=1 00:50:19.550 00:50:19.550 ' 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:50:19.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:19.550 --rc genhtml_branch_coverage=1 00:50:19.550 --rc genhtml_function_coverage=1 00:50:19.550 --rc genhtml_legend=1 00:50:19.550 --rc geninfo_all_blocks=1 00:50:19.550 --rc geninfo_unexecuted_blocks=1 00:50:19.550 00:50:19.550 ' 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:50:19.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:19.550 --rc genhtml_branch_coverage=1 00:50:19.550 --rc genhtml_function_coverage=1 00:50:19.550 --rc genhtml_legend=1 00:50:19.550 --rc geninfo_all_blocks=1 00:50:19.550 --rc geninfo_unexecuted_blocks=1 00:50:19.550 00:50:19.550 ' 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:19.550 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:19.551 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:50:19.551 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:50:19.551 Cannot find device "nvmf_init_br" 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:50:19.551 Cannot find device "nvmf_init_br2" 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:50:19.551 Cannot find device "nvmf_tgt_br" 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:50:19.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:50:19.810 Cannot find device "nvmf_tgt_br2" 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:50:19.810 Cannot find device "nvmf_init_br" 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:50:19.810 Cannot find device "nvmf_init_br2" 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:50:19.810 Cannot find device "nvmf_tgt_br" 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:50:19.810 Cannot find device "nvmf_tgt_br2" 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:50:19.810 Cannot find device "nvmf_br" 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:50:19.810 Cannot find device "nvmf_init_if" 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:50:19.810 Cannot find device "nvmf_init_if2" 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:50:19.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:19.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:50:19.810 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:50:19.811 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:50:20.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:50:20.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:50:20.070 00:50:20.070 --- 10.0.0.3 ping statistics --- 00:50:20.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:20.070 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:50:20.070 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:50:20.070 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:50:20.070 00:50:20.070 --- 10.0.0.4 ping statistics --- 00:50:20.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:20.070 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:50:20.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:50:20.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:50:20.070 00:50:20.070 --- 10.0.0.1 ping statistics --- 00:50:20.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:20.070 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:50:20.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:50:20.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:50:20.070 00:50:20.070 --- 10.0.0.2 ping statistics --- 00:50:20.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:20.070 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75911 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75911 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75911 ']' 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:20.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:20.070 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:20.070 [2024-12-09 10:02:27.501760] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:50:20.070 [2024-12-09 10:02:27.501866] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:20.329 [2024-12-09 10:02:27.655839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:20.329 [2024-12-09 10:02:27.689229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:50:20.329 [2024-12-09 10:02:27.689285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:20.329 [2024-12-09 10:02:27.689317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:50:20.329 [2024-12-09 10:02:27.689342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:50:20.329 [2024-12-09 10:02:27.689363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:50:20.329 [2024-12-09 10:02:27.689681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:20.329 [2024-12-09 10:02:27.721166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.265 [2024-12-09 10:02:28.495566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.265 [2024-12-09 10:02:28.503687] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.265 10:02:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.265 null0 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.265 null1 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.265 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75943 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75943 /tmp/host.sock 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75943 ']' 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:21.265 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.265 [2024-12-09 10:02:28.592459] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:50:21.265 [2024-12-09 10:02:28.592773] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75943 ] 00:50:21.265 [2024-12-09 10:02:28.746702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:21.524 [2024-12-09 10:02:28.785186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:21.524 [2024-12-09 10:02:28.814914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:21.524 10:02:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.524 10:02:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.524 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.524 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:50:21.524 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:21.524 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:21.524 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.524 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.524 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:21.524 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:21.524 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.783 10:02:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.783 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.784 [2024-12-09 10:02:29.235882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:21.784 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:50:22.042 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:50:22.043 10:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:50:22.610 [2024-12-09 10:02:29.888654] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:50:22.610 [2024-12-09 10:02:29.888683] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:50:22.610 
[2024-12-09 10:02:29.888739] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:50:22.610 [2024-12-09 10:02:29.894697] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:50:22.610 [2024-12-09 10:02:29.949152] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:50:22.610 [2024-12-09 10:02:29.950104] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x142ada0:1 started. 00:50:22.610 [2024-12-09 10:02:29.951958] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:50:22.610 [2024-12-09 10:02:29.951996] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:50:22.610 [2024-12-09 10:02:29.957212] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x142ada0 was disconnected and freed. delete nvme_qpair. 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.177 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.436 [2024-12-09 10:02:30.700853] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1439190:1 started. 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:50:23.436 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.437 [2024-12-09 10:02:30.707529] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1439190 was disconnected and freed. delete nvme_qpair. 
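A condensed sketch of the RPC sequence this trace has driven so far, using only commands, NQNs, and addresses already shown above (rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py; the 10.0.0.3 address, the 8009/4420 ports, and the /tmp/host.sock path are specific to this run):

  # target side: the nvmf_tgt launched above under nvmf_tgt_ns_spdk, default RPC socket
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # host side: the second nvmf_tgt listening on /tmp/host.sock starts discovery, then the
  # test polls bdev_nvme_get_controllers / bdev_get_bdevs until nvme0 and nvme0n1 appear
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs
  # adding the second namespace on the target is picked up by the host via AER, yielding nvme0n2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1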
00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.437 [2024-12-09 10:02:30.817646] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:50:23.437 [2024-12-09 10:02:30.817830] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:50:23.437 [2024-12-09 10:02:30.817859] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:50:23.437 [2024-12-09 10:02:30.823833] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.437 [2024-12-09 10:02:30.882307] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:50:23.437 [2024-12-09 10:02:30.882370] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:50:23.437 [2024-12-09 10:02:30.882381] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:50:23.437 [2024-12-09 10:02:30.882403] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.437 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.697 10:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.697 [2024-12-09 10:02:31.046206] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:50:23.697 [2024-12-09 10:02:31.046242] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:50:23.697 [2024-12-09 10:02:31.049267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:23.697 [2024-12-09 10:02:31.049307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:23.697 [2024-12-09 10:02:31.049323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:23.697 [2024-12-09 10:02:31.049333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:23.697 [2024-12-09 10:02:31.049344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:23.697 [2024-12-09 10:02:31.049354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:23.697 [2024-12-09 10:02:31.049363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:23.697 [2024-12-09 10:02:31.049372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:23.697 [2024-12-09 10:02:31.049382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1406fb0 is same with the state(6) to be set 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:50:23.697 [2024-12-09 10:02:31.052218] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:50:23.697 [2024-12-09 10:02:31.052252] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:50:23.697 [2024-12-09 10:02:31.052308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1406fb0 (9): Bad file descriptor 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:23.697 10:02:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:50:23.697 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:50:23.957 10:02:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.957 
10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:50:23.957 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:50:23.958 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:23.958 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:23.958 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:23.958 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:23.958 10:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:25.334 [2024-12-09 10:02:32.459480] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:50:25.334 [2024-12-09 10:02:32.459514] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:50:25.334 [2024-12-09 10:02:32.459550] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:50:25.334 [2024-12-09 10:02:32.465518] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:50:25.334 [2024-12-09 10:02:32.523861] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:50:25.334 [2024-12-09 10:02:32.524594] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x142dc00:1 started. 00:50:25.334 [2024-12-09 10:02:32.526606] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:50:25.334 [2024-12-09 10:02:32.526667] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:50:25.334 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:25.334 [2024-12-09 10:02:32.528293] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x142dc00 was disconnected and freed. delete nvme_qpair. 
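For reference, the discovery sequence traced above, and the duplicate-start check that follows, can be reproduced by hand with the same RPCs the test wraps in rpc_cmd. This is a minimal sketch, not part of the captured log: it assumes the host application is still listening on /tmp/host.sock and that scripts/rpc.py from the SPDK repo is invoked directly instead of the rpc_cmd helper; the address, ports and hostnqn are the ones used by this run.

# Start a discovery service on the target's discovery endpoint and wait for
# the initial attach to finish (-w), as traced above.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w

# A second start with the same controller name (-b nvme) is expected to be
# rejected with "File exists" (code -17); that is the negative case shown in
# the JSON-RPC request/response dump below.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w

# Inspect what discovery attached: the discovery controller itself and the
# namespaces exposed as bdevs (nvme0n1 and nvme0n2 in this run).
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs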
00:50:25.334 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:25.335 request: 00:50:25.335 { 00:50:25.335 "name": "nvme", 00:50:25.335 "trtype": "tcp", 00:50:25.335 "traddr": "10.0.0.3", 00:50:25.335 "adrfam": "ipv4", 00:50:25.335 "trsvcid": "8009", 00:50:25.335 "hostnqn": "nqn.2021-12.io.spdk:test", 00:50:25.335 "wait_for_attach": true, 00:50:25.335 "method": "bdev_nvme_start_discovery", 00:50:25.335 "req_id": 1 00:50:25.335 } 00:50:25.335 Got JSON-RPC error response 00:50:25.335 response: 00:50:25.335 { 00:50:25.335 "code": -17, 00:50:25.335 "message": "File exists" 00:50:25.335 } 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:25.335 request: 00:50:25.335 { 00:50:25.335 "name": "nvme_second", 00:50:25.335 "trtype": "tcp", 00:50:25.335 "traddr": "10.0.0.3", 00:50:25.335 "adrfam": "ipv4", 00:50:25.335 "trsvcid": "8009", 00:50:25.335 "hostnqn": "nqn.2021-12.io.spdk:test", 00:50:25.335 "wait_for_attach": true, 00:50:25.335 "method": "bdev_nvme_start_discovery", 00:50:25.335 "req_id": 1 00:50:25.335 } 00:50:25.335 Got JSON-RPC error response 00:50:25.335 response: 00:50:25.335 { 00:50:25.335 "code": -17, 00:50:25.335 "message": "File exists" 00:50:25.335 } 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:50:25.335 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:26.722 [2024-12-09 10:02:33.803010] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:50:26.722 [2024-12-09 10:02:33.803081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x142abb0 with addr=10.0.0.3, port=8010 00:50:26.722 [2024-12-09 10:02:33.803106] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:50:26.722 [2024-12-09 10:02:33.803117] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:50:26.722 [2024-12-09 10:02:33.803127] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:50:27.317 [2024-12-09 10:02:34.803008] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:50:27.317 [2024-12-09 10:02:34.803088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143a5d0 with addr=10.0.0.3, port=8010 00:50:27.317 [2024-12-09 10:02:34.803111] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:50:27.317 [2024-12-09 10:02:34.803123] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:50:27.317 [2024-12-09 10:02:34.803132] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:50:28.694 [2024-12-09 10:02:35.802841] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:50:28.694 request: 00:50:28.694 { 00:50:28.694 "name": "nvme_second", 00:50:28.694 "trtype": "tcp", 00:50:28.694 "traddr": "10.0.0.3", 00:50:28.694 "adrfam": "ipv4", 00:50:28.694 "trsvcid": "8010", 00:50:28.694 "hostnqn": "nqn.2021-12.io.spdk:test", 00:50:28.694 "wait_for_attach": false, 00:50:28.694 "attach_timeout_ms": 3000, 00:50:28.694 "method": "bdev_nvme_start_discovery", 00:50:28.694 "req_id": 1 00:50:28.694 } 00:50:28.694 Got JSON-RPC error response 00:50:28.694 response: 00:50:28.694 { 00:50:28.694 "code": -110, 00:50:28.694 "message": "Connection timed out" 00:50:28.694 } 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:50:28.694 10:02:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75943 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:50:28.694 rmmod nvme_tcp 00:50:28.694 rmmod nvme_fabrics 00:50:28.694 rmmod nvme_keyring 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75911 ']' 00:50:28.694 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75911 00:50:28.695 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75911 ']' 00:50:28.695 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75911 00:50:28.695 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:50:28.695 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:28.695 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75911 00:50:28.695 killing process with pid 75911 00:50:28.695 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:50:28.695 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:50:28.695 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75911' 00:50:28.695 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75911 00:50:28.695 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75911 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:50:28.954 00:50:28.954 real 0m9.630s 00:50:28.954 user 0m17.921s 00:50:28.954 sys 0m1.930s 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:28.954 ************************************ 00:50:28.954 END TEST nvmf_host_discovery 00:50:28.954 ************************************ 00:50:28.954 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:50:29.212 ************************************ 
00:50:29.212 START TEST nvmf_host_multipath_status 00:50:29.212 ************************************ 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:50:29.212 * Looking for test storage... 00:50:29.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:50:29.212 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:50:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:29.213 --rc genhtml_branch_coverage=1 00:50:29.213 --rc genhtml_function_coverage=1 00:50:29.213 --rc genhtml_legend=1 00:50:29.213 --rc geninfo_all_blocks=1 00:50:29.213 --rc geninfo_unexecuted_blocks=1 00:50:29.213 00:50:29.213 ' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:50:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:29.213 --rc genhtml_branch_coverage=1 00:50:29.213 --rc genhtml_function_coverage=1 00:50:29.213 --rc genhtml_legend=1 00:50:29.213 --rc geninfo_all_blocks=1 00:50:29.213 --rc geninfo_unexecuted_blocks=1 00:50:29.213 00:50:29.213 ' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:50:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:29.213 --rc genhtml_branch_coverage=1 00:50:29.213 --rc genhtml_function_coverage=1 00:50:29.213 --rc genhtml_legend=1 00:50:29.213 --rc geninfo_all_blocks=1 00:50:29.213 --rc geninfo_unexecuted_blocks=1 00:50:29.213 00:50:29.213 ' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:50:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:29.213 --rc genhtml_branch_coverage=1 00:50:29.213 --rc genhtml_function_coverage=1 00:50:29.213 --rc genhtml_legend=1 00:50:29.213 --rc geninfo_all_blocks=1 00:50:29.213 --rc geninfo_unexecuted_blocks=1 00:50:29.213 00:50:29.213 ' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:50:29.213 10:02:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:50:29.213 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:50:29.213 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:50:29.214 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:50:29.472 Cannot find device "nvmf_init_br" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:50:29.472 Cannot find device "nvmf_init_br2" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:50:29.472 Cannot find device "nvmf_tgt_br" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:50:29.472 Cannot find device "nvmf_tgt_br2" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:50:29.472 Cannot find device "nvmf_init_br" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:50:29.472 Cannot find device "nvmf_init_br2" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:50:29.472 Cannot find device "nvmf_tgt_br" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:50:29.472 Cannot find device "nvmf_tgt_br2" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:50:29.472 Cannot find device "nvmf_br" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:50:29.472 Cannot find device "nvmf_init_if" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:50:29.472 Cannot find device "nvmf_init_if2" 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:50:29.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:29.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:50:29.472 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:50:29.731 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:50:29.731 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:50:29.731 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:50:29.731 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:50:29.731 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:50:29.731 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:50:29.731 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:50:29.731 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:50:29.731 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:50:29.731 00:50:29.731 --- 10.0.0.3 ping statistics --- 00:50:29.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:29.732 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:50:29.732 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:50:29.732 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:50:29.732 00:50:29.732 --- 10.0.0.4 ping statistics --- 00:50:29.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:29.732 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:50:29.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:50:29.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:50:29.732 00:50:29.732 --- 10.0.0.1 ping statistics --- 00:50:29.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:29.732 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:50:29.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:29.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:50:29.732 00:50:29.732 --- 10.0.0.2 ping statistics --- 00:50:29.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:29.732 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76438 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76438 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76438 ']' 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:29.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
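The nvmf_veth_init sequence traced above reduces to a small iproute2 topology: two initiator veth pairs kept in the root namespace (10.0.0.1/24 and 10.0.0.2/24), two target veth pairs whose *_if ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), and a bridge joining the four *_br ends, with iptables rules admitting NVMe/TCP on port 4420. The condensed sketch below only restates commands already visible in this trace (link-up commands and the FORWARD rule are elided); the comments are interpretive, not part of the run.

# condensed recap of nvmf_veth_init as traced above (run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator path 1, later 10.0.0.1/24
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator path 2, later 10.0.0.2/24
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target path 1, later 10.0.0.3/24
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target path 2, later 10.0.0.4/24
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the test namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br  master nvmf_br                       # bridge all four *_br ends together
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator side
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT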
00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:29.732 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:50:29.732 [2024-12-09 10:02:37.197381] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:50:29.732 [2024-12-09 10:02:37.197465] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:29.991 [2024-12-09 10:02:37.351444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:50:29.991 [2024-12-09 10:02:37.390309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:50:29.991 [2024-12-09 10:02:37.390377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:29.991 [2024-12-09 10:02:37.390391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:50:29.991 [2024-12-09 10:02:37.390401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:50:29.991 [2024-12-09 10:02:37.390409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:50:29.991 [2024-12-09 10:02:37.391293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:29.991 [2024-12-09 10:02:37.391314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:29.991 [2024-12-09 10:02:37.425312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:50:29.991 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:29.991 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:50:29.991 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:50:29.991 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:50:29.991 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:50:30.250 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:30.250 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76438 00:50:30.250 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:50:30.508 [2024-12-09 10:02:37.808908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:30.508 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:50:30.766 Malloc0 00:50:30.766 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:50:31.025 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:50:31.284 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:50:31.543 [2024-12-09 10:02:39.024306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:50:31.801 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:50:32.060 [2024-12-09 10:02:39.328456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:50:32.060 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76486 00:50:32.060 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:50:32.060 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:50:32.060 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76486 /var/tmp/bdevperf.sock 00:50:32.060 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76486 ']' 00:50:32.060 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:50:32.060 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:32.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:50:32.060 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
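With the target side already configured above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, and listeners on 4420 and 4421), the bdevperf side that the trace walks through next follows a fixed pattern: raise the bdev retry count, then attach the same subsystem twice in multipath mode, once per listener port, so that the resulting Nvme0n1 bdev has two I/O paths. The sketch below restates those RPCs in one place; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the flag comments are interpretive rather than part of the trace.

# host-side multipath attach as traced below, condensed
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1          # -r -1: unlimited bdev retries on path errors
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -x multipath -l -1 -o 10                                          # first path, listener 4420
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -x multipath -l -1 -o 10                                          # second path, listener 4421; both back the Nvme0n1 bdev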
00:50:32.060 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:32.060 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:50:32.319 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:32.319 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:50:32.319 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:50:32.577 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:50:32.834 Nvme0n1 00:50:32.835 10:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:50:33.401 Nvme0n1 00:50:33.402 10:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:50:33.402 10:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:50:35.304 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:50:35.304 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:50:35.623 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:50:35.898 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:50:37.274 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:50:37.274 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:50:37.274 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:37.274 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:37.274 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:37.274 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:50:37.274 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:37.274 10:02:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:37.533 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:37.533 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:37.533 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:50:37.533 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:37.791 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:37.791 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:37.791 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:37.791 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:38.049 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:38.049 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:50:38.049 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:38.049 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:38.307 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:38.307 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:50:38.307 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:38.307 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:38.566 10:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:38.827 10:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:50:38.827 10:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:50:39.086 10:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:50:39.345 10:02:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:50:40.280 10:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:50:40.281 10:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:50:40.281 10:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:40.281 10:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:40.539 10:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:40.539 10:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:50:40.539 10:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:40.539 10:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:40.798 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:40.798 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:40.798 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:50:40.798 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:41.365 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:41.365 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:41.365 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:41.365 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:41.623 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:41.623 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:50:41.623 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:41.623 10:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:41.882 10:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:41.882 10:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:50:41.882 10:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:41.882 10:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:42.141 10:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:42.141 10:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:50:42.141 10:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:50:42.400 10:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:50:42.659 10:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:50:44.036 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:50:44.036 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:50:44.036 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:44.036 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:44.036 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:44.036 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:50:44.036 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:44.036 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:44.295 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:44.295 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:44.295 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:44.295 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:50:44.553 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:44.553 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:50:44.553 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:44.553 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:44.812 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:44.812 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:50:44.812 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:44.812 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:45.070 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:45.070 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:50:45.070 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:45.070 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:45.638 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:45.638 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:50:45.638 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:50:45.897 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:50:46.155 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:50:47.092 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:50:47.092 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:50:47.092 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:47.092 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:47.351 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:47.351 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:50:47.351 10:02:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:47.351 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:47.610 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:47.610 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:47.610 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:50:47.610 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:47.867 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:47.867 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:47.867 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:47.867 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:48.124 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:48.124 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:50:48.124 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:48.124 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:48.382 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:48.382 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:50:48.382 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:48.382 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:48.641 10:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:48.641 10:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:50:48.641 10:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:50:48.900 10:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:50:49.159 10:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:50:50.534 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:50:50.534 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:50:50.534 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:50.534 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:50.534 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:50.534 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:50:50.534 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:50.534 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:50.792 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:50.792 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:50.792 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:50:50.792 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:51.358 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:51.358 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:51.358 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:51.358 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:51.358 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:51.358 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:50:51.358 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:51.358 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:50:51.615 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:51.615 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:50:51.615 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:51.615 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:51.873 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:51.873 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:50:51.873 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:50:52.439 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:50:52.439 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:50:53.812 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:50:53.812 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:50:53.812 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:53.812 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:53.812 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:53.812 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:50:53.812 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:53.812 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:54.070 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:54.070 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:54.070 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:54.070 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
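Every check_status round in the remainder of this trace uses the same probe: set_ANA_state flips the ANA state of one or both listeners on the target, the script sleeps one second, and then port_status asks the bdevperf RPC socket for its I/O paths and extracts a single boolean per port with jq. A minimal sketch of one such round, using the same RPCs and jq filters as the trace (rpc.py again abbreviates the full script path, and the expected values named in the comments are illustrative):

# one ANA flip plus the host-side probe, condensed from the traced commands
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -n non_optimized      # target: demote the 4420 listener
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4421 -n optimized          # target: promote the 4421 listener
sleep 1                                              # let the host re-evaluate its paths

# host: is 4420 still the active ("current") path? check_status expects false after the flip above
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
# swapping .current for .connected or .accessible yields the other two flags check_status verifies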
00:50:54.350 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:54.350 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:54.350 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:54.350 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:54.629 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:54.629 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:50:54.629 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:54.629 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:54.887 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:54.887 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:50:54.887 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:54.887 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:55.453 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:55.453 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:50:55.712 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:50:55.712 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:50:55.970 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:50:56.228 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:50:57.163 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:50:57.163 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:50:57.163 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:50:57.163 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:57.422 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:57.422 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:50:57.422 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:57.422 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:57.991 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:57.991 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:57.991 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:57.991 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:50:58.250 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:58.250 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:58.250 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:58.250 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:58.508 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:58.508 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:50:58.508 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:58.508 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:58.767 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:58.767 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:50:58.767 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:58.767 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:59.025 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:59.025 
10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:50:59.025 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:50:59.284 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:50:59.542 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:51:00.479 10:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:51:00.479 10:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:51:00.479 10:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:00.479 10:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:01.045 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:01.045 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:51:01.046 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:01.046 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:01.305 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:01.305 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:01.305 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:01.305 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:01.563 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:01.563 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:01.563 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:01.563 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:01.822 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:01.822 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:01.822 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:01.822 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:02.079 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:02.080 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:51:02.080 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:02.080 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:02.338 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:02.338 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:51:02.338 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:51:02.608 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:51:02.901 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:51:04.296 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:51:04.296 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:51:04.296 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:04.296 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:04.296 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:04.296 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:51:04.296 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:04.296 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:04.554 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:04.554 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:51:04.554 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:04.554 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:04.812 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:04.812 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:04.813 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:04.813 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:05.071 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:05.071 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:05.071 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:05.071 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:05.640 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:05.640 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:51:05.640 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:05.640 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:05.898 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:05.898 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:51:05.898 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:51:06.156 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:51:06.415 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:51:07.349 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:51:07.349 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:51:07.349 10:03:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:07.349 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:07.607 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:07.607 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:51:07.607 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:07.607 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:07.865 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:07.865 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:07.865 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:07.865 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:08.123 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:08.123 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:08.123 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:08.123 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:08.381 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:08.381 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:08.381 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:08.381 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:08.640 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:08.640 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:51:08.640 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:08.640 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76486 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76486 ']' 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76486 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76486 00:51:09.215 killing process with pid 76486 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76486' 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76486 00:51:09.215 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76486 00:51:09.215 { 00:51:09.215 "results": [ 00:51:09.215 { 00:51:09.215 "job": "Nvme0n1", 00:51:09.215 "core_mask": "0x4", 00:51:09.215 "workload": "verify", 00:51:09.215 "status": "terminated", 00:51:09.215 "verify_range": { 00:51:09.215 "start": 0, 00:51:09.215 "length": 16384 00:51:09.215 }, 00:51:09.215 "queue_depth": 128, 00:51:09.215 "io_size": 4096, 00:51:09.215 "runtime": 35.607568, 00:51:09.215 "iops": 8384.987146552665, 00:51:09.215 "mibps": 32.75385604122135, 00:51:09.215 "io_failed": 0, 00:51:09.215 "io_timeout": 0, 00:51:09.215 "avg_latency_us": 15234.985388716297, 00:51:09.216 "min_latency_us": 224.3490909090909, 00:51:09.216 "max_latency_us": 4087539.898181818 00:51:09.216 } 00:51:09.216 ], 00:51:09.216 "core_count": 1 00:51:09.216 } 00:51:09.216 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76486 00:51:09.216 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:51:09.216 [2024-12-09 10:02:39.418122] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:51:09.216 [2024-12-09 10:02:39.418227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76486 ] 00:51:09.216 [2024-12-09 10:02:39.577693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:09.216 [2024-12-09 10:02:39.617276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:51:09.216 [2024-12-09 10:02:39.650089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:51:09.216 Running I/O for 90 seconds... 
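The passage above is one pattern repeated for each ANA combination: set the ANA state of the two listeners with nvmf_subsystem_listener_set_ana_state, sleep one second, then query bdev_nvme_get_io_paths over the bdevperf RPC socket and filter the result with jq per trsvcid. Below is a minimal standalone sketch of that cycle, reusing the rpc.py path, socket, NQN and addresses that appear verbatim in the log; the helper bodies are an illustrative reconstruction, not the test script's own code.

    # Illustrative sketch of the set-state / check-status cycle logged above;
    # not the original multipath_status.sh, but the same RPCs and jq filter.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    set_ana_state() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    port_status() {     # $1 = trsvcid, $2 = current|connected|accessible, $3 = expected value
        local got
        got=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

    set_ana_state non_optimized inaccessible   # the last transition exercised above
    sleep 1
    port_status 4420 current true && port_status 4421 accessible false

The jq filter is what yields the bare true/false values the log then compares with [[ ... == \t\r\u\e ]] and [[ ... == \f\a\l\s\e ]].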
00:51:09.216 6938.00 IOPS, 27.10 MiB/s [2024-12-09T10:03:16.718Z] 7053.00 IOPS, 27.55 MiB/s [2024-12-09T10:03:16.718Z] 6963.33 IOPS, 27.20 MiB/s [2024-12-09T10:03:16.718Z] 7014.25 IOPS, 27.40 MiB/s [2024-12-09T10:03:16.718Z] 7019.60 IOPS, 27.42 MiB/s [2024-12-09T10:03:16.718Z] 7218.00 IOPS, 28.20 MiB/s [2024-12-09T10:03:16.718Z] 7548.86 IOPS, 29.49 MiB/s [2024-12-09T10:03:16.718Z] 7761.00 IOPS, 30.32 MiB/s [2024-12-09T10:03:16.718Z] 7928.00 IOPS, 30.97 MiB/s [2024-12-09T10:03:16.718Z] 8082.50 IOPS, 31.57 MiB/s [2024-12-09T10:03:16.718Z] 8194.55 IOPS, 32.01 MiB/s [2024-12-09T10:03:16.718Z] 8275.58 IOPS, 32.33 MiB/s [2024-12-09T10:03:16.718Z] 8350.38 IOPS, 32.62 MiB/s [2024-12-09T10:03:16.718Z] 8423.07 IOPS, 32.90 MiB/s [2024-12-09T10:03:16.718Z] 8472.20 IOPS, 33.09 MiB/s [2024-12-09T10:03:16.718Z] [2024-12-09 10:02:56.358067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.358137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.358194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.358233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.358272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.358310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.358348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.358386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.358423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
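Every completion in this dump carries the same status pair, printed as (03/02): status code type 0x3 (Path Related Status) with status code 0x02, which the driver spells out as ASYMMETRIC ACCESS INACCESSIBLE. In-flight WRITEs and READs are being completed with an ANA error after an ANA transition earlier in the run, so the multipath layer can retry them on the remaining path. A small illustrative decoder, not part of the test, for pulling that pair out of a line formatted like the ones above:

    # Illustrative only: extract and name the "(SCT/SC)" pair from a completion line
    # such as: ... ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d ...
    decode_status() {
        local sct sc
        sct=$(sed -n 's/.*(\([0-9a-f]*\)\/\([0-9a-f]*\)).*/\1/p' <<<"$1")
        sc=$(sed -n 's/.*(\([0-9a-f]*\)\/\([0-9a-f]*\)).*/\2/p' <<<"$1")
        case "$sct/$sc" in
            03/02) echo "Path Related Status / ANA Inaccessible" ;;
            03/03) echo "Path Related Status / ANA Transition" ;;
            00/00) echo "Generic / Successful Completion" ;;
            *)     echo "SCT 0x$sct, SC 0x$sc" ;;
        esac
    }

    decode_status "ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0"
    # prints: Path Related Status / ANA Inaccessible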
00:51:09.216 [2024-12-09 10:02:56.358841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.358940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.358968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.359024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.359063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.216 [2024-12-09 10:02:56.359102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.359168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.359208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.359256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.359295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.359334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.359373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.359412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:51:09.216 [2024-12-09 10:02:56.359435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.216 [2024-12-09 10:02:56.359451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.359490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.359538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.359579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.359618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.359657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.359695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.359734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.359772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.359811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.359849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.359887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.359925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.359947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.359978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:51:09.217 
[2024-12-09 10:02:56.360104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.360158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.360197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.360235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.360274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.360321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.360359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.360396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.217 [2024-12-09 10:02:56.360436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.360952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.360982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.361007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.361023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.361053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.361071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:51:09.217 [2024-12-09 10:02:56.361093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.217 [2024-12-09 10:02:56.361109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.361147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.361185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.361224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.361263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.361301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.361350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.361389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.361427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 
[2024-12-09 10:02:56.361704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.361973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.361998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.362014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.362061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.218 [2024-12-09 10:02:56.362101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4056 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.362667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.362683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:51:09.218 [2024-12-09 10:02:56.364021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.218 [2024-12-09 10:02:56.364052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:09.219 
[2024-12-09 10:02:56.364683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.364976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.364996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.365035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.219 [2024-12-09 10:02:56.365448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.365953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.365986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.366010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.366026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.366049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.366064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:09.219 [2024-12-09 10:02:56.366086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.219 [2024-12-09 10:02:56.366105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.366486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.366524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.366563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.366601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.366640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 
[2024-12-09 10:02:56.366679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.366717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.366758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.366937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.366972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.387799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.387878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.387907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.387940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.387982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4368 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.388049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.388121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.388172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.388243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.388305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.388403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.388469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.388531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.220 [2024-12-09 10:02:56.388594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.388656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.220 [2024-12-09 10:02:56.388718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:51:09.220 [2024-12-09 10:02:56.388754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.388782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.388819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.388844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.388880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.388905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.388941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.388966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.389052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.389113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.389175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.389253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.389314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.389384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.389446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.389507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.389569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.389630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.221 [2024-12-09 10:02:56.389691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.221 [2024-12-09 10:02:56.389753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.221 [2024-12-09 10:02:56.389826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.221 [2024-12-09 10:02:56.389886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.389923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.221 [2024-12-09 10:02:56.389947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:51:09.221 [2024-12-09 10:02:56.390015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.221 [2024-12-09 10:02:56.390043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.221 [2024-12-09 10:02:56.390105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.221 [2024-12-09 10:02:56.390167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.390926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.390983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.391012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.391050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.391075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.391111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.391136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.391173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.221 [2024-12-09 10:02:56.391198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.391234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.221 [2024-12-09 10:02:56.391259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:51:09.221 [2024-12-09 10:02:56.391295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.221 [2024-12-09 10:02:56.391320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.391357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.391381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.391417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.391442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.391478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.391504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.391541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.391566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.391614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.391640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.391676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.391710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.391746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.391782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.391818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.391843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.391880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.391905] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.391942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.391984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.392049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.392128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.392189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.392252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.392925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.392950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.393658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.393684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.396295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.396347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.396401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.222 [2024-12-09 10:02:56.396430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.396468] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.396494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:51:09.222 [2024-12-09 10:02:56.396531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.222 [2024-12-09 10:02:56.396557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.396593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.396643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.396683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.396709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.396746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.396771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.396807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.396832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.396869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.396895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.396931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.396977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.397049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.397112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397148] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.397173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.397236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.397298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.397359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.397421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.397498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.397559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.397621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.397683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.397744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 
dnr:0 00:51:09.223 [2024-12-09 10:02:56.397781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.397807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.397902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.397939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.397984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.398049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.398120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.398181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.398242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.398330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.398392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.398453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.398515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.398576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.398644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.398715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.398778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.398839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.398904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.398940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.398982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.399021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.399047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.399084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.223 [2024-12-09 10:02:56.399141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.399182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.399208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:51:09.223 [2024-12-09 10:02:56.399245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.223 [2024-12-09 10:02:56.399270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.399332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.399394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.399455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.399517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.399578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.399640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.399702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:51:09.224 [2024-12-09 10:02:56.399763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.399825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.399897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.399935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.399981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4464 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.400683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.400732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.400769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.400807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.400845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.400883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400905] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.400921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.400947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.400963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.401000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.224 [2024-12-09 10:02:56.401017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.401039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.401055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.401077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.401093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:51:09.224 [2024-12-09 10:02:56.401119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.224 [2024-12-09 10:02:56.401135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.401639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.401684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:51:09.225 [2024-12-09 10:02:56.401707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.401723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.401761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.401799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.401838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.401875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.401913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.401951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.401987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.402004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.402042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.402080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.402117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.402166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.402221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.402262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.225 [2024-12-09 10:02:56.402298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:09.225 [2024-12-09 10:02:56.402736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.225 [2024-12-09 10:02:56.402751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.402773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.402789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.402811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.402827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.402849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.402865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.417123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.417184] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.417211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.417227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.417249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.417264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.417285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.417301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.417339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.417355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.417378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.417394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.417417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.417432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:02:56.417958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:02:56.418019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:51:09.226 8221.19 IOPS, 32.11 MiB/s [2024-12-09T10:03:16.728Z] 7737.59 IOPS, 30.22 MiB/s [2024-12-09T10:03:16.728Z] 7307.72 IOPS, 28.55 MiB/s [2024-12-09T10:03:16.728Z] 6923.11 IOPS, 27.04 MiB/s [2024-12-09T10:03:16.728Z] 6781.00 IOPS, 26.49 MiB/s [2024-12-09T10:03:16.728Z] 6896.19 IOPS, 26.94 MiB/s [2024-12-09T10:03:16.728Z] 6998.18 IOPS, 27.34 MiB/s [2024-12-09T10:03:16.728Z] 7175.09 IOPS, 28.03 MiB/s [2024-12-09T10:03:16.728Z] 7393.04 IOPS, 28.88 MiB/s [2024-12-09T10:03:16.728Z] 7580.88 IOPS, 29.61 MiB/s [2024-12-09T10:03:16.728Z] 7734.35 IOPS, 30.21 MiB/s [2024-12-09T10:03:16.728Z] 7788.00 IOPS, 30.42 MiB/s [2024-12-09T10:03:16.728Z] 7835.61 IOPS, 30.61 MiB/s [2024-12-09T10:03:16.728Z] 7880.72 IOPS, 30.78 MiB/s [2024-12-09T10:03:16.728Z] 7975.40 IOPS, 31.15 MiB/s [2024-12-09T10:03:16.728Z] 8099.58 IOPS, 31.64 MiB/s [2024-12-09T10:03:16.728Z] 8227.72 IOPS, 32.14 MiB/s 
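The long runs of *NOTICE* command/completion pairs above are the I/Os that completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the tested path was unavailable during the multipath failover window of this test; the periodic IOPS reports dip and then climb back once the path is usable again. For skimming a dump like this, a small helper can condense the notices into per-status, per-queue counts. This is a hypothetical sketch, not part of the SPDK test suite: it assumes only the nvme_qpair.c notice format visible above, and the script name and field names are illustrative.

#!/usr/bin/env python3
# Hypothetical helper (not part of SPDK): condense spdk_nvme_print_completion
# notices like the ones above into per-status, per-queue counts.
import re
import sys
from collections import Counter

# Matches e.g. "spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
# INACCESSIBLE (03/02) qid:1 cid:61 ..." as printed by nvme_qpair.c.
NOTICE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: "
    r"(?P<status>.+?) \((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\) "
    r"qid:(?P<qid>\d+)"
)

def summarize(lines):
    """Count completion notices grouped by (status text, sct, sc, qid)."""
    counts = Counter()
    for line in lines:
        for m in NOTICE.finditer(line):
            counts[(m["status"], m["sct"], m["sc"], m["qid"])] += 1
    return counts

if __name__ == "__main__":
    for (status, sct, sc, qid), n in summarize(sys.stdin).most_common():
        print(f"{n:8d}  qid:{qid}  {status} ({sct}/{sc})")

Piping the console output through it (for example: python3 summarize_completions.py < console.log, where both names are placeholders) prints one line per distinct status/queue pair with its count, which makes the failover window easy to spot without reading every notice.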
[2024-12-09T10:03:16.728Z] [2024-12-09 10:03:13.674768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.674836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.674894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.674916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.674940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.674968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.674996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.675012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.675050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.675414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.675452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.675489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.675526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.675565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.675602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.675639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.226 [2024-12-09 10:03:13.675676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.226 [2024-12-09 10:03:13.675836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:09.226 [2024-12-09 10:03:13.675859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.675874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.675896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.675912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.675934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.675950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.676009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.676030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.676054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:51:09.227 [2024-12-09 10:03:13.676081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.676107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.676123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.676145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.676161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.676183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.676199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.676222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.676238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.676260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.676286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.676310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.676326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.677551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.677598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.677637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.677675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.677713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.677751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.677789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.677827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.677865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.677902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.677953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.677995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:09.227 [2024-12-09 10:03:13.678012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.678039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:09.227 [2024-12-09 10:03:13.678054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:51:09.227 [2024-12-09 10:03:13.678077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:51:09.227 [2024-12-09 10:03:13.678094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:51:09.227 [2024-12-09 10:03:13.678134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:51:09.227 [2024-12-09 10:03:13.678154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:51:09.227 [2024-12-09 10:03:13.678178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:51:09.227 [2024-12-09 10:03:13.678194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:51:09.227 [2024-12-09 10:03:13.678216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:51:09.227 [2024-12-09 10:03:13.678232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:51:09.227 [2024-12-09 10:03:13.678254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:51:09.227 [2024-12-09 10:03:13.678270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:51:09.227 [2024-12-09 10:03:13.678292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:51:09.227 [2024-12-09 10:03:13.678308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:51:09.227 8330.21 IOPS, 32.54 MiB/s
[2024-12-09T10:03:16.729Z] 8353.68 IOPS, 32.63 MiB/s
[2024-12-09T10:03:16.729Z] 8375.11 IOPS, 32.72 MiB/s
[2024-12-09T10:03:16.729Z] Received shutdown signal, test time was about 35.608412 seconds
00:51:09.227
00:51:09.227 Latency(us)
00:51:09.227 [2024-12-09T10:03:16.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:51:09.227 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:51:09.227 Verification LBA range: start 0x0 length 0x4000
00:51:09.227 Nvme0n1 : 35.61 8384.99 32.75 0.00 0.00 15234.99 224.35 4087539.90
00:51:09.227 [2024-12-09T10:03:16.729Z] ===================================================================================================================
00:51:09.227 [2024-12-09T10:03:16.729Z] Total : 8384.99 32.75 0.00 0.00 15234.99 224.35 4087539.90
00:51:09.227 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:51:09.487 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:51:09.487 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:51:09.487 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@148 -- # nvmftestfini 00:51:09.487 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:51:09.487 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:51:09.487 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:51:09.487 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:51:09.487 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:51:09.487 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:51:09.487 rmmod nvme_tcp 00:51:09.745 rmmod nvme_fabrics 00:51:09.745 rmmod nvme_keyring 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76438 ']' 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76438 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76438 ']' 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76438 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76438 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:09.745 killing process with pid 76438 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76438' 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76438 00:51:09.745 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76438 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:51:10.004 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:51:10.005 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:51:10.005 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:51:10.005 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:51:10.005 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:10.005 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:51:10.264 00:51:10.264 real 0m41.060s 00:51:10.264 user 2m13.982s 00:51:10.264 sys 0m11.713s 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:10.264 ************************************ 00:51:10.264 END TEST nvmf_host_multipath_status 00:51:10.264 ************************************ 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:51:10.264 ************************************ 00:51:10.264 START TEST nvmf_discovery_remove_ifc 00:51:10.264 ************************************ 
00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:51:10.264 * Looking for test storage... 00:51:10.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:51:10.264 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:10.524 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:51:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:10.525 --rc genhtml_branch_coverage=1 00:51:10.525 --rc genhtml_function_coverage=1 00:51:10.525 --rc genhtml_legend=1 00:51:10.525 --rc geninfo_all_blocks=1 00:51:10.525 --rc geninfo_unexecuted_blocks=1 00:51:10.525 00:51:10.525 ' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:51:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:10.525 --rc genhtml_branch_coverage=1 00:51:10.525 --rc genhtml_function_coverage=1 00:51:10.525 --rc genhtml_legend=1 00:51:10.525 --rc geninfo_all_blocks=1 00:51:10.525 --rc geninfo_unexecuted_blocks=1 00:51:10.525 00:51:10.525 ' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:51:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:10.525 --rc genhtml_branch_coverage=1 00:51:10.525 --rc genhtml_function_coverage=1 00:51:10.525 --rc genhtml_legend=1 00:51:10.525 --rc geninfo_all_blocks=1 00:51:10.525 --rc geninfo_unexecuted_blocks=1 00:51:10.525 00:51:10.525 ' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:51:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:10.525 --rc genhtml_branch_coverage=1 00:51:10.525 --rc genhtml_function_coverage=1 00:51:10.525 --rc genhtml_legend=1 00:51:10.525 --rc geninfo_all_blocks=1 00:51:10.525 --rc geninfo_unexecuted_blocks=1 00:51:10.525 00:51:10.525 ' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:51:10.525 10:03:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:51:10.525 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:51:10.525 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:51:10.526 10:03:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:51:10.526 Cannot find device "nvmf_init_br" 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:51:10.526 Cannot find device "nvmf_init_br2" 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:51:10.526 Cannot find device "nvmf_tgt_br" 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:51:10.526 Cannot find device "nvmf_tgt_br2" 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:51:10.526 Cannot find device "nvmf_init_br" 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:51:10.526 Cannot find device "nvmf_init_br2" 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:51:10.526 Cannot find device "nvmf_tgt_br" 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:51:10.526 Cannot find device "nvmf_tgt_br2" 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:51:10.526 Cannot find device "nvmf_br" 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:51:10.526 Cannot find device "nvmf_init_if" 00:51:10.526 10:03:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:51:10.526 Cannot find device "nvmf_init_if2" 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:10.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:10.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:51:10.526 10:03:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:51:10.526 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:10.785 10:03:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:51:10.785 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:51:10.785 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:51:10.785 00:51:10.785 --- 10.0.0.3 ping statistics --- 00:51:10.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:10.785 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:51:10.785 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:51:10.785 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:51:10.785 00:51:10.785 --- 10.0.0.4 ping statistics --- 00:51:10.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:10.785 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:51:10.785 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:51:10.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:51:10.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:51:10.785 00:51:10.785 --- 10.0.0.1 ping statistics --- 00:51:10.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:10.785 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:51:10.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:10.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:51:10.786 00:51:10.786 --- 10.0.0.2 ping statistics --- 00:51:10.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:10.786 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77342 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77342 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77342 ']' 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:10.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:10.786 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:11.044 [2024-12-09 10:03:18.310205] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:51:11.044 [2024-12-09 10:03:18.310302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:11.044 [2024-12-09 10:03:18.458910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:11.044 [2024-12-09 10:03:18.491881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:11.044 [2024-12-09 10:03:18.491951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:11.044 [2024-12-09 10:03:18.491963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:11.044 [2024-12-09 10:03:18.491982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:11.044 [2024-12-09 10:03:18.491992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:51:11.044 [2024-12-09 10:03:18.492318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:11.044 [2024-12-09 10:03:18.522252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:11.303 [2024-12-09 10:03:18.626846] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:11.303 [2024-12-09 10:03:18.635006] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:51:11.303 null0 00:51:11.303 [2024-12-09 10:03:18.666896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77362 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77362 /tmp/host.sock 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77362 ']' 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:11.303 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:11.303 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:11.303 [2024-12-09 10:03:18.747074] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:51:11.303 [2024-12-09 10:03:18.747160] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77362 ] 00:51:11.630 [2024-12-09 10:03:18.901464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:11.630 [2024-12-09 10:03:18.940364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:11.630 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:11.630 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:51:11.630 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:51:11.630 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:51:11.630 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:11.630 10:03:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:11.630 10:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:11.630 10:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:51:11.630 10:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:11.630 10:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:11.630 [2024-12-09 10:03:19.045726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:51:11.630 10:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:11.630 10:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:51:11.630 10:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:11.630 10:03:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:13.007 [2024-12-09 10:03:20.089886] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:51:13.007 [2024-12-09 10:03:20.089973] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:51:13.007 [2024-12-09 10:03:20.090008] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:51:13.007 [2024-12-09 10:03:20.095932] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:51:13.007 [2024-12-09 10:03:20.150351] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:51:13.007 [2024-12-09 10:03:20.151290] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2489f00:1 started. 00:51:13.007 [2024-12-09 10:03:20.153024] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:51:13.007 [2024-12-09 10:03:20.153113] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:51:13.007 [2024-12-09 10:03:20.153143] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:51:13.007 [2024-12-09 10:03:20.153160] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:51:13.007 [2024-12-09 10:03:20.153187] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:51:13.007 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:13.007 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:51:13.007 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:13.008 [2024-12-09 10:03:20.158510] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2489f00 was disconnected and freed. delete nvme_qpair. 
00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:13.008 10:03:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:13.944 10:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:13.944 10:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:13.944 10:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:13.944 10:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:13.944 10:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:13.944 10:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:13.944 10:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:13.944 10:03:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:13.944 10:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:13.944 10:03:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:14.880 10:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:14.880 10:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:14.880 10:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:14.880 10:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:14.880 10:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:14.880 10:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:14.880 10:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:14.880 10:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:15.139 10:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:15.139 10:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:16.074 10:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:16.074 10:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:16.074 10:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:16.074 10:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:16.074 10:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:16.074 10:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:16.074 10:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:16.074 10:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:16.074 10:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:16.074 10:03:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:17.009 10:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:17.009 10:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:17.009 10:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:17.009 10:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:17.009 10:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:17.009 10:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:17.009 10:03:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:17.267 10:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:17.267 10:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:17.267 10:03:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:18.201 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:18.201 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:18.201 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:18.201 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:18.202 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:18.202 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:18.202 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:18.202 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:18.202 [2024-12-09 10:03:25.580990] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:51:18.202 [2024-12-09 10:03:25.581063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:18.202 [2024-12-09 10:03:25.581081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:18.202 [2024-12-09 10:03:25.581094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:18.202 [2024-12-09 10:03:25.581104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:18.202 [2024-12-09 10:03:25.581114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:18.202 [2024-12-09 10:03:25.581123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:18.202 [2024-12-09 10:03:25.581133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:18.202 [2024-12-09 10:03:25.581142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:18.202 [2024-12-09 10:03:25.581152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:51:18.202 [2024-12-09 10:03:25.581161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:18.202 [2024-12-09 10:03:25.581170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465fc0 is same with the state(6) to be set 00:51:18.202 [2024-12-09 10:03:25.590987] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2465fc0 (9): Bad file descriptor 00:51:18.202 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:18.202 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:18.202 [2024-12-09 10:03:25.601003] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:51:18.202 [2024-12-09 10:03:25.601040] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:51:18.202 [2024-12-09 10:03:25.601048] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:51:18.202 [2024-12-09 10:03:25.601054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:51:18.202 [2024-12-09 10:03:25.601099] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:51:19.137 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:19.137 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:19.137 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:19.137 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:19.137 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:19.137 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:19.137 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:19.396 [2024-12-09 10:03:26.657029] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:51:19.396 [2024-12-09 10:03:26.657095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2465fc0 with addr=10.0.0.3, port=4420 00:51:19.396 [2024-12-09 10:03:26.657118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465fc0 is same with the state(6) to be set 00:51:19.396 [2024-12-09 10:03:26.657182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2465fc0 (9): Bad file descriptor 00:51:19.396 [2024-12-09 10:03:26.657885] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:51:19.396 [2024-12-09 10:03:26.657945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:51:19.396 [2024-12-09 10:03:26.657964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:51:19.396 [2024-12-09 10:03:26.658015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:51:19.396 [2024-12-09 10:03:26.658034] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:51:19.396 [2024-12-09 10:03:26.658047] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
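
The repeated traces above are the test's bdev polling loop: it lists host-side bdev names over the SPDK RPC socket and sleeps one second between attempts until the expected entry (here nvme0n1) disappears. A minimal bash sketch of what the trace suggests these helpers look like; the real bodies live in host/discovery_remove_ifc.sh, so treat the names and structure here as an approximation rather than the script itself:

    # Approximate shape of the helpers exercised in the xtrace output above
    # (assumed reconstruction, not copied from host/discovery_remove_ifc.sh).
    get_bdev_list() {
        # list bdev names known to the host app listening on /tmp/host.sock
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        # poll once per second until the bdev list matches the expectation;
        # an empty "expected" therefore waits for the bdev to be removed
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
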
00:51:19.396 [2024-12-09 10:03:26.658056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:51:19.396 [2024-12-09 10:03:26.658073] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:51:19.396 [2024-12-09 10:03:26.658084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:51:19.396 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:19.396 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:19.396 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:20.338 [2024-12-09 10:03:27.658133] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:51:20.338 [2024-12-09 10:03:27.658207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:51:20.338 [2024-12-09 10:03:27.658239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:51:20.338 [2024-12-09 10:03:27.658267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:51:20.338 [2024-12-09 10:03:27.658277] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:51:20.338 [2024-12-09 10:03:27.658287] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:51:20.338 [2024-12-09 10:03:27.658294] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:51:20.338 [2024-12-09 10:03:27.658299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
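
The errno 110 (ETIMEDOUT) failures above are the intended effect of the test having removed the target interface's address earlier in the run: every reconnect attempt to 10.0.0.3:4420 now times out. The same condition can be observed from a plain shell, independent of SPDK; this probe is illustrative only and is not part of the test script:

    # Probe the subsystem port directly. With the target address removed the
    # connect attempt hangs until killed by timeout, mirroring errno 110 above.
    if timeout 3 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
        echo "10.0.0.3:4420 reachable"
    else
        echo "10.0.0.3:4420 unreachable (timed out or refused)"
    fi
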
00:51:20.338 [2024-12-09 10:03:27.658334] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:51:20.338 [2024-12-09 10:03:27.658408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:20.339 [2024-12-09 10:03:27.658424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:20.339 [2024-12-09 10:03:27.658437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:20.339 [2024-12-09 10:03:27.658447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:20.339 [2024-12-09 10:03:27.658456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:20.339 [2024-12-09 10:03:27.658465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:20.339 [2024-12-09 10:03:27.658475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:20.339 [2024-12-09 10:03:27.658483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:20.339 [2024-12-09 10:03:27.658493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:51:20.339 [2024-12-09 10:03:27.658502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:20.339 [2024-12-09 10:03:27.658528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
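
At this point both the data controller and the discovery controller are in a failed state and the old namespace bdev is gone; the script verifies this simply by seeing an empty bdev list in the check that follows. For interactive debugging one could also ask the host app for its controller view over the same RPC socket; this is shown only as an optional aid and does not appear in the test:

    # Optional debugging aid (not part of discovery_remove_ifc.sh):
    # dump the host-side NVMe controller state over the test's RPC socket.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq .
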
00:51:20.339 [2024-12-09 10:03:27.658569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f1a20 (9): Bad file descriptor 00:51:20.339 [2024-12-09 10:03:27.659562] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:51:20.339 [2024-12-09 10:03:27.659606] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:51:20.339 10:03:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:21.715 10:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:21.715 10:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:21.715 10:03:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:21.715 10:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:21.715 10:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:21.715 10:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:21.715 10:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:21.715 10:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:21.715 10:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:51:21.715 10:03:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:22.283 [2024-12-09 10:03:29.664666] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:51:22.283 [2024-12-09 10:03:29.664699] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:51:22.283 [2024-12-09 10:03:29.664735] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:51:22.283 [2024-12-09 10:03:29.670736] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:51:22.283 [2024-12-09 10:03:29.725110] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:51:22.283 [2024-12-09 10:03:29.725824] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x24921d0:1 started. 00:51:22.283 [2024-12-09 10:03:29.727081] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:51:22.283 [2024-12-09 10:03:29.727131] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:51:22.283 [2024-12-09 10:03:29.727155] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:51:22.283 [2024-12-09 10:03:29.727171] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:51:22.283 [2024-12-09 10:03:29.727181] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:51:22.283 [2024-12-09 10:03:29.733416] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x24921d0 was disconnected and freed. delete nvme_qpair. 
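
The recovery half of the test is visible above: the target address is re-added inside the network namespace, the interface is brought back up, and the discovery service re-attaches and creates nvme1 against 10.0.0.3:4420. Condensed from the @82-@86 trace lines, the sequence being exercised is roughly the following (not the verbatim script text):

    # Restore the target address and wait for rediscovery to recreate the bdev.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1   # polls get_bdev_list until it reports nvme1n1
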
00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77362 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77362 ']' 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77362 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77362 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:22.542 killing process with pid 77362 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77362' 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77362 00:51:22.542 10:03:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77362 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:51:22.802 rmmod nvme_tcp 00:51:22.802 rmmod nvme_fabrics 00:51:22.802 rmmod nvme_keyring 00:51:22.802 10:03:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77342 ']' 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77342 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77342 ']' 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77342 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77342 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:51:22.802 killing process with pid 77342 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77342' 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77342 00:51:22.802 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77342 00:51:23.060 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:51:23.060 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:51:23.060 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:51:23.061 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:51:23.319 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:51:23.319 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:51:23.319 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:23.319 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:23.319 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:51:23.319 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:23.320 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:23.320 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:23.320 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:51:23.320 00:51:23.320 real 0m13.116s 00:51:23.320 user 0m22.344s 00:51:23.320 sys 0m2.340s 00:51:23.320 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:23.320 10:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:23.320 ************************************ 00:51:23.320 END TEST nvmf_discovery_remove_ifc 00:51:23.320 ************************************ 00:51:23.320 10:03:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:51:23.320 10:03:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:51:23.320 10:03:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:23.320 10:03:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:51:23.320 ************************************ 00:51:23.320 START TEST nvmf_identify_kernel_target 00:51:23.320 ************************************ 00:51:23.320 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:51:23.579 * Looking for test storage... 
00:51:23.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:51:23.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:23.579 --rc genhtml_branch_coverage=1 00:51:23.579 --rc genhtml_function_coverage=1 00:51:23.579 --rc genhtml_legend=1 00:51:23.579 --rc geninfo_all_blocks=1 00:51:23.579 --rc geninfo_unexecuted_blocks=1 00:51:23.579 00:51:23.579 ' 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:51:23.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:23.579 --rc genhtml_branch_coverage=1 00:51:23.579 --rc genhtml_function_coverage=1 00:51:23.579 --rc genhtml_legend=1 00:51:23.579 --rc geninfo_all_blocks=1 00:51:23.579 --rc geninfo_unexecuted_blocks=1 00:51:23.579 00:51:23.579 ' 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:51:23.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:23.579 --rc genhtml_branch_coverage=1 00:51:23.579 --rc genhtml_function_coverage=1 00:51:23.579 --rc genhtml_legend=1 00:51:23.579 --rc geninfo_all_blocks=1 00:51:23.579 --rc geninfo_unexecuted_blocks=1 00:51:23.579 00:51:23.579 ' 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:51:23.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:23.579 --rc genhtml_branch_coverage=1 00:51:23.579 --rc genhtml_function_coverage=1 00:51:23.579 --rc genhtml_legend=1 00:51:23.579 --rc geninfo_all_blocks=1 00:51:23.579 --rc geninfo_unexecuted_blocks=1 00:51:23.579 00:51:23.579 ' 00:51:23.579 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
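
The scripts/common.sh traces a few records above implement a component-wise version comparison: "lt 1.15 2" expands to "cmp_versions 1.15 '<' 2", which splits both version strings on '.', '-' and ':' and compares them field by field; here it evaluates true for lcov 1.15, selecting the 1.x-style --rc lcov_* coverage options seen in the exports that follow. A simplified bash sketch of the same idea, for readers following the trace; the real helper supports more operators than just "less than":

    # Simplified component-wise "less than" in the spirit of cmp_versions.
    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 < 2"
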
00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:51:23.580 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:51:23.580 10:03:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:51:23.580 10:03:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:51:23.580 Cannot find device "nvmf_init_br" 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:51:23.580 10:03:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:51:23.580 Cannot find device "nvmf_init_br2" 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:51:23.580 Cannot find device "nvmf_tgt_br" 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:51:23.580 Cannot find device "nvmf_tgt_br2" 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:51:23.580 Cannot find device "nvmf_init_br" 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:51:23.580 Cannot find device "nvmf_init_br2" 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:51:23.580 Cannot find device "nvmf_tgt_br" 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:51:23.580 Cannot find device "nvmf_tgt_br2" 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:51:23.580 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:51:23.840 Cannot find device "nvmf_br" 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:51:23.840 Cannot find device "nvmf_init_if" 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:51:23.840 Cannot find device "nvmf_init_if2" 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:23.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:23.840 10:03:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:23.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:51:23.840 10:03:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:51:23.840 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:51:24.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:51:24.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:51:24.099 00:51:24.099 --- 10.0.0.3 ping statistics --- 00:51:24.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:24.099 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:51:24.099 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:51:24.099 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:51:24.099 00:51:24.099 --- 10.0.0.4 ping statistics --- 00:51:24.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:24.099 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:51:24.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:24.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:51:24.099 00:51:24.099 --- 10.0.0.1 ping statistics --- 00:51:24.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:24.099 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:51:24.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:51:24.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:51:24.099 00:51:24.099 --- 10.0.0.2 ping statistics --- 00:51:24.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:24.099 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:51:24.099 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:51:24.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:24.358 Waiting for block devices as requested 00:51:24.358 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:51:24.616 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:51:24.616 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:51:24.616 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:51:24.616 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:51:24.616 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:51:24.616 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:51:24.616 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:24.616 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:51:24.616 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:51:24.616 10:03:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:51:24.616 No valid GPT data, bailing 00:51:24.616 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:51:24.616 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:51:24.616 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:51:24.616 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:51:24.616 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:51:24.617 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:51:24.617 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:51:24.617 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:51:24.617 10:03:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:51:24.617 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:24.617 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:51:24.617 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:51:24.617 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:51:24.617 No valid GPT data, bailing 00:51:24.617 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:51:24.876 No valid GPT data, bailing 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:51:24.876 No valid GPT data, bailing 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:51:24.876 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:51:24.877 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:51:24.877 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:51:24.877 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:51:24.877 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:51:24.877 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:51:24.877 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:51:24.877 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:51:24.877 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:51:24.877 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid=eacce601-ab17-4447-87df-663b0f4f5455 -a 10.0.0.1 -t tcp -s 4420 00:51:24.877 00:51:24.877 Discovery Log Number of Records 2, Generation counter 2 00:51:24.877 =====Discovery Log Entry 0====== 00:51:24.877 trtype: tcp 00:51:24.877 adrfam: ipv4 00:51:24.877 subtype: current discovery subsystem 00:51:24.877 treq: not specified, sq flow control disable supported 00:51:24.877 portid: 1 00:51:24.877 trsvcid: 4420 00:51:24.877 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:51:24.877 traddr: 10.0.0.1 00:51:24.877 eflags: none 00:51:24.877 sectype: none 00:51:24.877 =====Discovery Log Entry 1====== 00:51:24.877 trtype: tcp 00:51:24.877 adrfam: ipv4 00:51:24.877 subtype: nvme subsystem 00:51:24.877 treq: not 
specified, sq flow control disable supported 00:51:24.877 portid: 1 00:51:24.877 trsvcid: 4420 00:51:24.877 subnqn: nqn.2016-06.io.spdk:testnqn 00:51:24.877 traddr: 10.0.0.1 00:51:24.877 eflags: none 00:51:24.877 sectype: none 00:51:24.877 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:51:24.877 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:51:25.136 ===================================================== 00:51:25.136 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:51:25.136 ===================================================== 00:51:25.136 Controller Capabilities/Features 00:51:25.136 ================================ 00:51:25.136 Vendor ID: 0000 00:51:25.136 Subsystem Vendor ID: 0000 00:51:25.136 Serial Number: 40019907cb2ca7e2f728 00:51:25.136 Model Number: Linux 00:51:25.136 Firmware Version: 6.8.9-20 00:51:25.136 Recommended Arb Burst: 0 00:51:25.136 IEEE OUI Identifier: 00 00 00 00:51:25.136 Multi-path I/O 00:51:25.136 May have multiple subsystem ports: No 00:51:25.136 May have multiple controllers: No 00:51:25.136 Associated with SR-IOV VF: No 00:51:25.136 Max Data Transfer Size: Unlimited 00:51:25.136 Max Number of Namespaces: 0 00:51:25.136 Max Number of I/O Queues: 1024 00:51:25.136 NVMe Specification Version (VS): 1.3 00:51:25.136 NVMe Specification Version (Identify): 1.3 00:51:25.136 Maximum Queue Entries: 1024 00:51:25.136 Contiguous Queues Required: No 00:51:25.136 Arbitration Mechanisms Supported 00:51:25.136 Weighted Round Robin: Not Supported 00:51:25.136 Vendor Specific: Not Supported 00:51:25.136 Reset Timeout: 7500 ms 00:51:25.136 Doorbell Stride: 4 bytes 00:51:25.136 NVM Subsystem Reset: Not Supported 00:51:25.136 Command Sets Supported 00:51:25.136 NVM Command Set: Supported 00:51:25.136 Boot Partition: Not Supported 00:51:25.136 Memory Page Size Minimum: 4096 bytes 00:51:25.136 Memory Page Size Maximum: 4096 bytes 00:51:25.136 Persistent Memory Region: Not Supported 00:51:25.136 Optional Asynchronous Events Supported 00:51:25.136 Namespace Attribute Notices: Not Supported 00:51:25.136 Firmware Activation Notices: Not Supported 00:51:25.136 ANA Change Notices: Not Supported 00:51:25.136 PLE Aggregate Log Change Notices: Not Supported 00:51:25.136 LBA Status Info Alert Notices: Not Supported 00:51:25.136 EGE Aggregate Log Change Notices: Not Supported 00:51:25.136 Normal NVM Subsystem Shutdown event: Not Supported 00:51:25.136 Zone Descriptor Change Notices: Not Supported 00:51:25.136 Discovery Log Change Notices: Supported 00:51:25.136 Controller Attributes 00:51:25.136 128-bit Host Identifier: Not Supported 00:51:25.136 Non-Operational Permissive Mode: Not Supported 00:51:25.136 NVM Sets: Not Supported 00:51:25.136 Read Recovery Levels: Not Supported 00:51:25.136 Endurance Groups: Not Supported 00:51:25.136 Predictable Latency Mode: Not Supported 00:51:25.136 Traffic Based Keep ALive: Not Supported 00:51:25.136 Namespace Granularity: Not Supported 00:51:25.137 SQ Associations: Not Supported 00:51:25.137 UUID List: Not Supported 00:51:25.137 Multi-Domain Subsystem: Not Supported 00:51:25.137 Fixed Capacity Management: Not Supported 00:51:25.137 Variable Capacity Management: Not Supported 00:51:25.137 Delete Endurance Group: Not Supported 00:51:25.137 Delete NVM Set: Not Supported 00:51:25.137 Extended LBA Formats Supported: Not Supported 00:51:25.137 Flexible Data 
Placement Supported: Not Supported 00:51:25.137 00:51:25.137 Controller Memory Buffer Support 00:51:25.137 ================================ 00:51:25.137 Supported: No 00:51:25.137 00:51:25.137 Persistent Memory Region Support 00:51:25.137 ================================ 00:51:25.137 Supported: No 00:51:25.137 00:51:25.137 Admin Command Set Attributes 00:51:25.137 ============================ 00:51:25.137 Security Send/Receive: Not Supported 00:51:25.137 Format NVM: Not Supported 00:51:25.137 Firmware Activate/Download: Not Supported 00:51:25.137 Namespace Management: Not Supported 00:51:25.137 Device Self-Test: Not Supported 00:51:25.137 Directives: Not Supported 00:51:25.137 NVMe-MI: Not Supported 00:51:25.137 Virtualization Management: Not Supported 00:51:25.137 Doorbell Buffer Config: Not Supported 00:51:25.137 Get LBA Status Capability: Not Supported 00:51:25.137 Command & Feature Lockdown Capability: Not Supported 00:51:25.137 Abort Command Limit: 1 00:51:25.137 Async Event Request Limit: 1 00:51:25.137 Number of Firmware Slots: N/A 00:51:25.137 Firmware Slot 1 Read-Only: N/A 00:51:25.137 Firmware Activation Without Reset: N/A 00:51:25.137 Multiple Update Detection Support: N/A 00:51:25.137 Firmware Update Granularity: No Information Provided 00:51:25.137 Per-Namespace SMART Log: No 00:51:25.137 Asymmetric Namespace Access Log Page: Not Supported 00:51:25.137 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:51:25.137 Command Effects Log Page: Not Supported 00:51:25.137 Get Log Page Extended Data: Supported 00:51:25.137 Telemetry Log Pages: Not Supported 00:51:25.137 Persistent Event Log Pages: Not Supported 00:51:25.137 Supported Log Pages Log Page: May Support 00:51:25.137 Commands Supported & Effects Log Page: Not Supported 00:51:25.137 Feature Identifiers & Effects Log Page:May Support 00:51:25.137 NVMe-MI Commands & Effects Log Page: May Support 00:51:25.137 Data Area 4 for Telemetry Log: Not Supported 00:51:25.137 Error Log Page Entries Supported: 1 00:51:25.137 Keep Alive: Not Supported 00:51:25.137 00:51:25.137 NVM Command Set Attributes 00:51:25.137 ========================== 00:51:25.137 Submission Queue Entry Size 00:51:25.137 Max: 1 00:51:25.137 Min: 1 00:51:25.137 Completion Queue Entry Size 00:51:25.137 Max: 1 00:51:25.137 Min: 1 00:51:25.137 Number of Namespaces: 0 00:51:25.137 Compare Command: Not Supported 00:51:25.137 Write Uncorrectable Command: Not Supported 00:51:25.137 Dataset Management Command: Not Supported 00:51:25.137 Write Zeroes Command: Not Supported 00:51:25.137 Set Features Save Field: Not Supported 00:51:25.137 Reservations: Not Supported 00:51:25.137 Timestamp: Not Supported 00:51:25.137 Copy: Not Supported 00:51:25.137 Volatile Write Cache: Not Present 00:51:25.137 Atomic Write Unit (Normal): 1 00:51:25.137 Atomic Write Unit (PFail): 1 00:51:25.137 Atomic Compare & Write Unit: 1 00:51:25.137 Fused Compare & Write: Not Supported 00:51:25.137 Scatter-Gather List 00:51:25.137 SGL Command Set: Supported 00:51:25.137 SGL Keyed: Not Supported 00:51:25.137 SGL Bit Bucket Descriptor: Not Supported 00:51:25.137 SGL Metadata Pointer: Not Supported 00:51:25.137 Oversized SGL: Not Supported 00:51:25.137 SGL Metadata Address: Not Supported 00:51:25.137 SGL Offset: Supported 00:51:25.137 Transport SGL Data Block: Not Supported 00:51:25.137 Replay Protected Memory Block: Not Supported 00:51:25.137 00:51:25.137 Firmware Slot Information 00:51:25.137 ========================= 00:51:25.137 Active slot: 0 00:51:25.137 00:51:25.137 00:51:25.137 Error Log 
00:51:25.137 ========= 00:51:25.137 00:51:25.137 Active Namespaces 00:51:25.137 ================= 00:51:25.137 Discovery Log Page 00:51:25.137 ================== 00:51:25.137 Generation Counter: 2 00:51:25.137 Number of Records: 2 00:51:25.137 Record Format: 0 00:51:25.137 00:51:25.137 Discovery Log Entry 0 00:51:25.137 ---------------------- 00:51:25.137 Transport Type: 3 (TCP) 00:51:25.137 Address Family: 1 (IPv4) 00:51:25.137 Subsystem Type: 3 (Current Discovery Subsystem) 00:51:25.137 Entry Flags: 00:51:25.137 Duplicate Returned Information: 0 00:51:25.137 Explicit Persistent Connection Support for Discovery: 0 00:51:25.137 Transport Requirements: 00:51:25.137 Secure Channel: Not Specified 00:51:25.137 Port ID: 1 (0x0001) 00:51:25.137 Controller ID: 65535 (0xffff) 00:51:25.137 Admin Max SQ Size: 32 00:51:25.137 Transport Service Identifier: 4420 00:51:25.137 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:51:25.137 Transport Address: 10.0.0.1 00:51:25.137 Discovery Log Entry 1 00:51:25.137 ---------------------- 00:51:25.137 Transport Type: 3 (TCP) 00:51:25.137 Address Family: 1 (IPv4) 00:51:25.137 Subsystem Type: 2 (NVM Subsystem) 00:51:25.137 Entry Flags: 00:51:25.137 Duplicate Returned Information: 0 00:51:25.137 Explicit Persistent Connection Support for Discovery: 0 00:51:25.137 Transport Requirements: 00:51:25.137 Secure Channel: Not Specified 00:51:25.137 Port ID: 1 (0x0001) 00:51:25.137 Controller ID: 65535 (0xffff) 00:51:25.137 Admin Max SQ Size: 32 00:51:25.137 Transport Service Identifier: 4420 00:51:25.137 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:51:25.137 Transport Address: 10.0.0.1 00:51:25.137 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:51:25.397 get_feature(0x01) failed 00:51:25.397 get_feature(0x02) failed 00:51:25.397 get_feature(0x04) failed 00:51:25.397 ===================================================== 00:51:25.397 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:51:25.397 ===================================================== 00:51:25.397 Controller Capabilities/Features 00:51:25.397 ================================ 00:51:25.397 Vendor ID: 0000 00:51:25.397 Subsystem Vendor ID: 0000 00:51:25.397 Serial Number: 4334c1cda355f35e7ad1 00:51:25.397 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:51:25.397 Firmware Version: 6.8.9-20 00:51:25.397 Recommended Arb Burst: 6 00:51:25.397 IEEE OUI Identifier: 00 00 00 00:51:25.397 Multi-path I/O 00:51:25.397 May have multiple subsystem ports: Yes 00:51:25.397 May have multiple controllers: Yes 00:51:25.397 Associated with SR-IOV VF: No 00:51:25.397 Max Data Transfer Size: Unlimited 00:51:25.397 Max Number of Namespaces: 1024 00:51:25.397 Max Number of I/O Queues: 128 00:51:25.397 NVMe Specification Version (VS): 1.3 00:51:25.397 NVMe Specification Version (Identify): 1.3 00:51:25.397 Maximum Queue Entries: 1024 00:51:25.397 Contiguous Queues Required: No 00:51:25.397 Arbitration Mechanisms Supported 00:51:25.397 Weighted Round Robin: Not Supported 00:51:25.397 Vendor Specific: Not Supported 00:51:25.397 Reset Timeout: 7500 ms 00:51:25.397 Doorbell Stride: 4 bytes 00:51:25.397 NVM Subsystem Reset: Not Supported 00:51:25.397 Command Sets Supported 00:51:25.397 NVM Command Set: Supported 00:51:25.397 Boot Partition: Not Supported 00:51:25.397 Memory 
Page Size Minimum: 4096 bytes 00:51:25.397 Memory Page Size Maximum: 4096 bytes 00:51:25.397 Persistent Memory Region: Not Supported 00:51:25.397 Optional Asynchronous Events Supported 00:51:25.397 Namespace Attribute Notices: Supported 00:51:25.397 Firmware Activation Notices: Not Supported 00:51:25.397 ANA Change Notices: Supported 00:51:25.397 PLE Aggregate Log Change Notices: Not Supported 00:51:25.397 LBA Status Info Alert Notices: Not Supported 00:51:25.397 EGE Aggregate Log Change Notices: Not Supported 00:51:25.397 Normal NVM Subsystem Shutdown event: Not Supported 00:51:25.397 Zone Descriptor Change Notices: Not Supported 00:51:25.397 Discovery Log Change Notices: Not Supported 00:51:25.397 Controller Attributes 00:51:25.397 128-bit Host Identifier: Supported 00:51:25.397 Non-Operational Permissive Mode: Not Supported 00:51:25.397 NVM Sets: Not Supported 00:51:25.397 Read Recovery Levels: Not Supported 00:51:25.397 Endurance Groups: Not Supported 00:51:25.397 Predictable Latency Mode: Not Supported 00:51:25.397 Traffic Based Keep ALive: Supported 00:51:25.397 Namespace Granularity: Not Supported 00:51:25.397 SQ Associations: Not Supported 00:51:25.397 UUID List: Not Supported 00:51:25.397 Multi-Domain Subsystem: Not Supported 00:51:25.397 Fixed Capacity Management: Not Supported 00:51:25.397 Variable Capacity Management: Not Supported 00:51:25.397 Delete Endurance Group: Not Supported 00:51:25.397 Delete NVM Set: Not Supported 00:51:25.397 Extended LBA Formats Supported: Not Supported 00:51:25.397 Flexible Data Placement Supported: Not Supported 00:51:25.397 00:51:25.397 Controller Memory Buffer Support 00:51:25.397 ================================ 00:51:25.397 Supported: No 00:51:25.397 00:51:25.397 Persistent Memory Region Support 00:51:25.397 ================================ 00:51:25.397 Supported: No 00:51:25.397 00:51:25.397 Admin Command Set Attributes 00:51:25.397 ============================ 00:51:25.397 Security Send/Receive: Not Supported 00:51:25.397 Format NVM: Not Supported 00:51:25.397 Firmware Activate/Download: Not Supported 00:51:25.397 Namespace Management: Not Supported 00:51:25.397 Device Self-Test: Not Supported 00:51:25.397 Directives: Not Supported 00:51:25.397 NVMe-MI: Not Supported 00:51:25.397 Virtualization Management: Not Supported 00:51:25.397 Doorbell Buffer Config: Not Supported 00:51:25.397 Get LBA Status Capability: Not Supported 00:51:25.397 Command & Feature Lockdown Capability: Not Supported 00:51:25.397 Abort Command Limit: 4 00:51:25.397 Async Event Request Limit: 4 00:51:25.397 Number of Firmware Slots: N/A 00:51:25.397 Firmware Slot 1 Read-Only: N/A 00:51:25.397 Firmware Activation Without Reset: N/A 00:51:25.397 Multiple Update Detection Support: N/A 00:51:25.397 Firmware Update Granularity: No Information Provided 00:51:25.397 Per-Namespace SMART Log: Yes 00:51:25.397 Asymmetric Namespace Access Log Page: Supported 00:51:25.397 ANA Transition Time : 10 sec 00:51:25.397 00:51:25.397 Asymmetric Namespace Access Capabilities 00:51:25.397 ANA Optimized State : Supported 00:51:25.397 ANA Non-Optimized State : Supported 00:51:25.397 ANA Inaccessible State : Supported 00:51:25.397 ANA Persistent Loss State : Supported 00:51:25.397 ANA Change State : Supported 00:51:25.397 ANAGRPID is not changed : No 00:51:25.397 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:51:25.397 00:51:25.397 ANA Group Identifier Maximum : 128 00:51:25.397 Number of ANA Group Identifiers : 128 00:51:25.397 Max Number of Allowed Namespaces : 1024 00:51:25.397 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:51:25.397 Command Effects Log Page: Supported 00:51:25.397 Get Log Page Extended Data: Supported 00:51:25.397 Telemetry Log Pages: Not Supported 00:51:25.397 Persistent Event Log Pages: Not Supported 00:51:25.397 Supported Log Pages Log Page: May Support 00:51:25.397 Commands Supported & Effects Log Page: Not Supported 00:51:25.397 Feature Identifiers & Effects Log Page:May Support 00:51:25.397 NVMe-MI Commands & Effects Log Page: May Support 00:51:25.397 Data Area 4 for Telemetry Log: Not Supported 00:51:25.397 Error Log Page Entries Supported: 128 00:51:25.397 Keep Alive: Supported 00:51:25.397 Keep Alive Granularity: 1000 ms 00:51:25.397 00:51:25.397 NVM Command Set Attributes 00:51:25.397 ========================== 00:51:25.397 Submission Queue Entry Size 00:51:25.397 Max: 64 00:51:25.397 Min: 64 00:51:25.397 Completion Queue Entry Size 00:51:25.397 Max: 16 00:51:25.397 Min: 16 00:51:25.397 Number of Namespaces: 1024 00:51:25.397 Compare Command: Not Supported 00:51:25.397 Write Uncorrectable Command: Not Supported 00:51:25.397 Dataset Management Command: Supported 00:51:25.397 Write Zeroes Command: Supported 00:51:25.397 Set Features Save Field: Not Supported 00:51:25.397 Reservations: Not Supported 00:51:25.397 Timestamp: Not Supported 00:51:25.397 Copy: Not Supported 00:51:25.397 Volatile Write Cache: Present 00:51:25.397 Atomic Write Unit (Normal): 1 00:51:25.397 Atomic Write Unit (PFail): 1 00:51:25.397 Atomic Compare & Write Unit: 1 00:51:25.397 Fused Compare & Write: Not Supported 00:51:25.397 Scatter-Gather List 00:51:25.397 SGL Command Set: Supported 00:51:25.397 SGL Keyed: Not Supported 00:51:25.397 SGL Bit Bucket Descriptor: Not Supported 00:51:25.397 SGL Metadata Pointer: Not Supported 00:51:25.397 Oversized SGL: Not Supported 00:51:25.397 SGL Metadata Address: Not Supported 00:51:25.397 SGL Offset: Supported 00:51:25.397 Transport SGL Data Block: Not Supported 00:51:25.397 Replay Protected Memory Block: Not Supported 00:51:25.397 00:51:25.397 Firmware Slot Information 00:51:25.397 ========================= 00:51:25.397 Active slot: 0 00:51:25.397 00:51:25.397 Asymmetric Namespace Access 00:51:25.397 =========================== 00:51:25.397 Change Count : 0 00:51:25.397 Number of ANA Group Descriptors : 1 00:51:25.397 ANA Group Descriptor : 0 00:51:25.397 ANA Group ID : 1 00:51:25.397 Number of NSID Values : 1 00:51:25.397 Change Count : 0 00:51:25.397 ANA State : 1 00:51:25.397 Namespace Identifier : 1 00:51:25.397 00:51:25.397 Commands Supported and Effects 00:51:25.397 ============================== 00:51:25.397 Admin Commands 00:51:25.397 -------------- 00:51:25.397 Get Log Page (02h): Supported 00:51:25.397 Identify (06h): Supported 00:51:25.397 Abort (08h): Supported 00:51:25.397 Set Features (09h): Supported 00:51:25.397 Get Features (0Ah): Supported 00:51:25.397 Asynchronous Event Request (0Ch): Supported 00:51:25.397 Keep Alive (18h): Supported 00:51:25.397 I/O Commands 00:51:25.397 ------------ 00:51:25.397 Flush (00h): Supported 00:51:25.397 Write (01h): Supported LBA-Change 00:51:25.397 Read (02h): Supported 00:51:25.397 Write Zeroes (08h): Supported LBA-Change 00:51:25.397 Dataset Management (09h): Supported 00:51:25.397 00:51:25.397 Error Log 00:51:25.397 ========= 00:51:25.397 Entry: 0 00:51:25.397 Error Count: 0x3 00:51:25.397 Submission Queue Id: 0x0 00:51:25.397 Command Id: 0x5 00:51:25.397 Phase Bit: 0 00:51:25.397 Status Code: 0x2 00:51:25.397 Status Code Type: 0x0 00:51:25.397 Do Not Retry: 1 00:51:25.397 Error 
Location: 0x28 00:51:25.397 LBA: 0x0 00:51:25.397 Namespace: 0x0 00:51:25.397 Vendor Log Page: 0x0 00:51:25.397 ----------- 00:51:25.397 Entry: 1 00:51:25.397 Error Count: 0x2 00:51:25.397 Submission Queue Id: 0x0 00:51:25.397 Command Id: 0x5 00:51:25.397 Phase Bit: 0 00:51:25.397 Status Code: 0x2 00:51:25.397 Status Code Type: 0x0 00:51:25.397 Do Not Retry: 1 00:51:25.397 Error Location: 0x28 00:51:25.397 LBA: 0x0 00:51:25.397 Namespace: 0x0 00:51:25.397 Vendor Log Page: 0x0 00:51:25.397 ----------- 00:51:25.397 Entry: 2 00:51:25.397 Error Count: 0x1 00:51:25.397 Submission Queue Id: 0x0 00:51:25.397 Command Id: 0x4 00:51:25.397 Phase Bit: 0 00:51:25.397 Status Code: 0x2 00:51:25.397 Status Code Type: 0x0 00:51:25.397 Do Not Retry: 1 00:51:25.397 Error Location: 0x28 00:51:25.397 LBA: 0x0 00:51:25.397 Namespace: 0x0 00:51:25.397 Vendor Log Page: 0x0 00:51:25.397 00:51:25.397 Number of Queues 00:51:25.397 ================ 00:51:25.397 Number of I/O Submission Queues: 128 00:51:25.397 Number of I/O Completion Queues: 128 00:51:25.397 00:51:25.397 ZNS Specific Controller Data 00:51:25.397 ============================ 00:51:25.397 Zone Append Size Limit: 0 00:51:25.397 00:51:25.397 00:51:25.397 Active Namespaces 00:51:25.397 ================= 00:51:25.397 get_feature(0x05) failed 00:51:25.397 Namespace ID:1 00:51:25.397 Command Set Identifier: NVM (00h) 00:51:25.397 Deallocate: Supported 00:51:25.397 Deallocated/Unwritten Error: Not Supported 00:51:25.397 Deallocated Read Value: Unknown 00:51:25.397 Deallocate in Write Zeroes: Not Supported 00:51:25.397 Deallocated Guard Field: 0xFFFF 00:51:25.397 Flush: Supported 00:51:25.397 Reservation: Not Supported 00:51:25.397 Namespace Sharing Capabilities: Multiple Controllers 00:51:25.397 Size (in LBAs): 1310720 (5GiB) 00:51:25.397 Capacity (in LBAs): 1310720 (5GiB) 00:51:25.397 Utilization (in LBAs): 1310720 (5GiB) 00:51:25.397 UUID: 24b5df5b-839e-450d-a52d-f879df52871d 00:51:25.397 Thin Provisioning: Not Supported 00:51:25.397 Per-NS Atomic Units: Yes 00:51:25.397 Atomic Boundary Size (Normal): 0 00:51:25.397 Atomic Boundary Size (PFail): 0 00:51:25.397 Atomic Boundary Offset: 0 00:51:25.397 NGUID/EUI64 Never Reused: No 00:51:25.397 ANA group ID: 1 00:51:25.397 Namespace Write Protected: No 00:51:25.397 Number of LBA Formats: 1 00:51:25.397 Current LBA Format: LBA Format #00 00:51:25.397 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:51:25.397 00:51:25.397 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:51:25.397 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:51:25.397 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:51:25.657 rmmod nvme_tcp 00:51:25.657 rmmod nvme_fabrics 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:51:25.657 10:03:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:51:25.657 10:03:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:25.657 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:51:25.916 10:03:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:51:26.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:26.742 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:51:26.742 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:51:26.742 ************************************ 00:51:26.742 END TEST nvmf_identify_kernel_target 00:51:26.742 ************************************ 00:51:26.742 00:51:26.742 real 0m3.399s 00:51:26.742 user 0m1.324s 00:51:26.742 sys 0m1.411s 00:51:26.742 10:03:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:26.742 10:03:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:51:26.742 10:03:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:51:26.742 10:03:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:51:26.742 10:03:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:26.742 10:03:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:51:26.742 ************************************ 00:51:26.742 START TEST nvmf_auth_host 00:51:26.742 ************************************ 00:51:26.742 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:51:27.002 * Looking for test storage... 
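[Editor's note] The identify_kernel_target trace above boils down to a small configfs recipe: load nvmet, create a subsystem with one namespace backed by the last suitable block device found by the scan, create a TCP port on 10.0.0.1:4420, link the subsystem into the port, query it with nvme discover / spdk_nvme_identify, and tear everything down again. The condensed sketch below restates those steps. It is an illustration only: the xtrace output hides the redirection target of each echo, so the configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumed from the standard nvmet layout rather than copied from the log (attr_model is at least consistent with the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" line in the identify output).

#!/usr/bin/env bash
# Sketch only -- condensed from the configure_kernel_target/clean_kernel_target
# steps traced above. Attribute file names are assumptions based on the standard
# nvmet configfs layout; the log does not show the echo redirection targets.
set -e

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
blockdev=/dev/nvme1n1     # last unused, non-zoned namespace picked by the block scan above
target_ip=10.0.0.1

modprobe nvmet            # as in the trace; nvmet_tcp ends up loaded too and is removed at cleanup

# Subsystem with one namespace backed by the block device, plus one TCP port.
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo "SPDK-$nqn"  > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo "$blockdev"  > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo "$target_ip" > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"

# Exporting the subsystem on the port makes it visible to discovery.
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a "$target_ip" -s 4420   # should list the discovery subsystem and testnqn

# Teardown, mirroring clean_kernel_target in the trace.
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet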
00:51:27.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:51:27.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:27.002 --rc genhtml_branch_coverage=1 00:51:27.002 --rc genhtml_function_coverage=1 00:51:27.002 --rc genhtml_legend=1 00:51:27.002 --rc geninfo_all_blocks=1 00:51:27.002 --rc geninfo_unexecuted_blocks=1 00:51:27.002 00:51:27.002 ' 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:51:27.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:27.002 --rc genhtml_branch_coverage=1 00:51:27.002 --rc genhtml_function_coverage=1 00:51:27.002 --rc genhtml_legend=1 00:51:27.002 --rc geninfo_all_blocks=1 00:51:27.002 --rc geninfo_unexecuted_blocks=1 00:51:27.002 00:51:27.002 ' 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:51:27.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:27.002 --rc genhtml_branch_coverage=1 00:51:27.002 --rc genhtml_function_coverage=1 00:51:27.002 --rc genhtml_legend=1 00:51:27.002 --rc geninfo_all_blocks=1 00:51:27.002 --rc geninfo_unexecuted_blocks=1 00:51:27.002 00:51:27.002 ' 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:51:27.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:27.002 --rc genhtml_branch_coverage=1 00:51:27.002 --rc genhtml_function_coverage=1 00:51:27.002 --rc genhtml_legend=1 00:51:27.002 --rc geninfo_all_blocks=1 00:51:27.002 --rc geninfo_unexecuted_blocks=1 00:51:27.002 00:51:27.002 ' 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:27.002 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:51:27.003 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:51:27.003 Cannot find device "nvmf_init_br" 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:51:27.003 Cannot find device "nvmf_init_br2" 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:51:27.003 Cannot find device "nvmf_tgt_br" 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:51:27.003 Cannot find device "nvmf_tgt_br2" 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:51:27.003 Cannot find device "nvmf_init_br" 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:51:27.003 Cannot find device "nvmf_init_br2" 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:51:27.003 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:51:27.003 Cannot find device "nvmf_tgt_br" 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:51:27.262 Cannot find device "nvmf_tgt_br2" 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:51:27.262 Cannot find device "nvmf_br" 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:51:27.262 Cannot find device "nvmf_init_if" 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:51:27.262 Cannot find device "nvmf_init_if2" 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:27.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:27.262 10:03:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:27.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:51:27.262 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:51:27.263 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:51:27.522 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:51:27.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:51:27.522 00:51:27.522 --- 10.0.0.3 ping statistics --- 00:51:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:27.522 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:51:27.522 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:51:27.522 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:51:27.522 00:51:27.522 --- 10.0.0.4 ping statistics --- 00:51:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:27.522 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:51:27.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:27.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:51:27.522 00:51:27.522 --- 10.0.0.1 ping statistics --- 00:51:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:27.522 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:51:27.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:51:27.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:51:27.522 00:51:27.522 --- 10.0.0.2 ping statistics --- 00:51:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:27.522 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78357 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78357 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78357 ']' 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
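With the iptables ACCEPT rules for TCP port 4420 in place and all four ping checks passing, nvmfappstart launches the SPDK target inside the namespace with nvme_auth debug logging and waitforlisten blocks until the RPC socket responds. A rough shell equivalent (the spdk_get_version polling probe is an assumption; the real waitforlisten helper lives in autotest_common.sh and is not shown in this log):

SPDK=/home/vagrant/spdk_repo/spdk

# Start the target inside the namespace with nvme_auth debug logging enabled.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

# Poll until the RPC socket answers; spdk_get_version is used here purely as
# a liveness probe (assumption - the actual helper is waitforlisten).
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; then
        break
    fi
    sleep 0.1
done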
00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:27.522 10:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eef789933b3d753b3ae7301a1837b0fc 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.I3L 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eef789933b3d753b3ae7301a1837b0fc 0 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eef789933b3d753b3ae7301a1837b0fc 0 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eef789933b3d753b3ae7301a1837b0fc 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:51:27.781 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.I3L 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.I3L 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.I3L 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:51:28.040 10:03:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8d72470fc82e8fedf82c91c7bbdc58c028b328a03abfb9b0cc71fdbb44f473b3 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rvm 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8d72470fc82e8fedf82c91c7bbdc58c028b328a03abfb9b0cc71fdbb44f473b3 3 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8d72470fc82e8fedf82c91c7bbdc58c028b328a03abfb9b0cc71fdbb44f473b3 3 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8d72470fc82e8fedf82c91c7bbdc58c028b328a03abfb9b0cc71fdbb44f473b3 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rvm 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rvm 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.rvm 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a8aa1bba8a45f162bf93ec8419855e20dfdf49a5da2ba6c8 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DRB 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a8aa1bba8a45f162bf93ec8419855e20dfdf49a5da2ba6c8 0 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a8aa1bba8a45f162bf93ec8419855e20dfdf49a5da2ba6c8 0 
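Each gen_dhchap_key call above draws len/2 random bytes with xxd, maps the digest name to an index (null=0, sha256=1, sha384=2, sha512=3) and hands the hex string to an inline python step that writes a DHHC-1:<digest>:<base64>: secret into a mode-0600 temp file. A compact re-implementation follows; the payload layout inside the python step (key bytes plus a little-endian CRC32, then base64) is inferred from the DHHC-1 strings that appear later in this log, so treat it as an approximation of nvmf/common.sh rather than a copy:

# Sketch of gen_dhchap_key <digest> <len>: <len> hex characters of random key
# material wrapped into a DHHC-1 secret and stored mode 0600 in /tmp.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # $len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")

    KEY=$key DIGEST=${digests[$digest]} python3 - > "$file" <<'PYEOF'
import base64, os, zlib
key = os.environ["KEY"].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte CRC32 suffix
print("DHHC-1:%02x:%s:" % (int(os.environ["DIGEST"]),
                           base64.b64encode(key + crc).decode()))
PYEOF

    chmod 0600 "$file"
    echo "$file"
}

# e.g. keys[2]=$(gen_dhchap_key sha256 32) yields a file containing a string
# of the form DHHC-1:01:<base64>: which is later registered with
# keyring_file_add_key.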
00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a8aa1bba8a45f162bf93ec8419855e20dfdf49a5da2ba6c8 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DRB 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DRB 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.DRB 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=002791f0088bfea5be4b2df47ea95763449d8c7f18aeedb0 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Asr 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 002791f0088bfea5be4b2df47ea95763449d8c7f18aeedb0 2 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 002791f0088bfea5be4b2df47ea95763449d8c7f18aeedb0 2 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=002791f0088bfea5be4b2df47ea95763449d8c7f18aeedb0 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Asr 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Asr 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Asr 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:51:28.040 10:03:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=354cf6901f22f4445da997f2b6d839b0 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.urp 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 354cf6901f22f4445da997f2b6d839b0 1 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 354cf6901f22f4445da997f2b6d839b0 1 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=354cf6901f22f4445da997f2b6d839b0 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:51:28.040 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.urp 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.urp 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.urp 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f8873b8b061a5385f8c634017817a510 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Brs 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f8873b8b061a5385f8c634017817a510 1 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f8873b8b061a5385f8c634017817a510 1 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=f8873b8b061a5385f8c634017817a510 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Brs 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Brs 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Brs 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=92fff864ae5643eb69f0a8eb992c2288c24bc6bca3961bbf 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SuI 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 92fff864ae5643eb69f0a8eb992c2288c24bc6bca3961bbf 2 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 92fff864ae5643eb69f0a8eb992c2288c24bc6bca3961bbf 2 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=92fff864ae5643eb69f0a8eb992c2288c24bc6bca3961bbf 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SuI 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SuI 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.SuI 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:51:28.300 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:51:28.301 10:03:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d6a88e256fdc405b25684413dff3b535 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rGa 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d6a88e256fdc405b25684413dff3b535 0 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d6a88e256fdc405b25684413dff3b535 0 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d6a88e256fdc405b25684413dff3b535 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rGa 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rGa 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.rGa 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0c63114d4d944c2141b67766b4091502ad867455e92479248fcb453ec7f87a94 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9GE 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0c63114d4d944c2141b67766b4091502ad867455e92479248fcb453ec7f87a94 3 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0c63114d4d944c2141b67766b4091502ad867455e92479248fcb453ec7f87a94 3 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0c63114d4d944c2141b67766b4091502ad867455e92479248fcb453ec7f87a94 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:51:28.301 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9GE 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9GE 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.9GE 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78357 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78357 ']' 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:28.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:28.617 10:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.I3L 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.rvm ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rvm 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.DRB 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Asr ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Asr 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.urp 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Brs ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Brs 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.SuI 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rGa ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rGa 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.9GE 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:28.876 10:03:36 
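The sequence above is host/auth.sh@80-82 unrolled: every generated key file is registered with the running SPDK application under the names key0-key4, and each controller key (where one was generated) under ckey0-ckey3, so later RPCs can refer to the secrets by name. Condensed (rpc_cmd is the test framework's rpc.py wrapper):

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    # Not every entry has a controller key; ckeys[4] is empty in this run.
    if [[ -n ${ckeys[i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done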
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:51:28.876 10:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:51:29.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:29.134 Waiting for block devices as requested 00:51:29.393 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:51:29.393 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:51:29.959 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:51:29.960 No valid GPT data, bailing 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:51:29.960 No valid GPT data, bailing 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:51:29.960 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:51:30.218 No valid GPT data, bailing 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:51:30.218 No valid GPT data, bailing 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:51:30.218 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid=eacce601-ab17-4447-87df-663b0f4f5455 -a 10.0.0.1 -t tcp -s 4420 00:51:30.219 00:51:30.219 Discovery Log Number of Records 2, Generation counter 2 00:51:30.219 =====Discovery Log Entry 0====== 00:51:30.219 trtype: tcp 00:51:30.219 adrfam: ipv4 00:51:30.219 subtype: current discovery subsystem 00:51:30.219 treq: not specified, sq flow control disable supported 00:51:30.219 portid: 1 00:51:30.219 trsvcid: 4420 00:51:30.219 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:51:30.219 traddr: 10.0.0.1 00:51:30.219 eflags: none 00:51:30.219 sectype: none 00:51:30.219 =====Discovery Log Entry 1====== 00:51:30.219 trtype: tcp 00:51:30.219 adrfam: ipv4 00:51:30.219 subtype: nvme subsystem 00:51:30.219 treq: not specified, sq flow control disable supported 00:51:30.219 portid: 1 00:51:30.219 trsvcid: 4420 00:51:30.219 subnqn: nqn.2024-02.io.spdk:cnode0 00:51:30.219 traddr: 10.0.0.1 00:51:30.219 eflags: none 00:51:30.219 sectype: none 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:30.219 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.477 nvme0n1 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.477 10:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.735 nvme0n1 00:51:30.735 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.735 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:30.735 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:30.735 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.735 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.735 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.735 
10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:30.735 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:30.736 10:03:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.736 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.995 nvme0n1 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:30.995 10:03:38 
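[annotation] The nvmf/common.sh@769-783 trace repeated above is get_main_ns_ip: it maps the active transport to the name of the environment variable that holds the address to dial (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), dereferences it, and prints the result, 10.0.0.1 in this run. A hedged condensation of that helper follows; TEST_TRANSPORT is an assumed name for the variable carrying "tcp", and the real function may handle more cases.

# Hedged condensation of the get_main_ns_ip logic visible in the trace.
get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z "${TEST_TRANSPORT:-}" ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]:-}   # name of the variable holding the address
    [[ -z "$ip" ]] && return 1
    [[ -z "${!ip}" ]] && return 1            # indirect expansion: the address itself
    echo "${!ip}"                            # 10.0.0.1 in this log
}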
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.995 nvme0n1 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:30.995 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.254 10:03:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.254 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:31.254 nvme0n1 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:31.255 
10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.255 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
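[annotation] The secrets cycled through in this section are NVMe-oF configured-secret strings of the form DHHC-1:<transform>:<base64>:, where the transform field indicates how the secret was hashed (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret plus an appended CRC-32; that reading is from the DH-HMAC-CHAP key convention, not from this log, so treat it as an assumption. keyid 4 above has an empty controller key, so that pass authenticates the host only. A secret of the same DHHC-1:03: shape can be produced with nvme-cli; the flag names below are from memory and should be checked against nvme gen-dhchap-key --help.

# Hedged example: generate a SHA-512-transformed 64-byte host secret with nvme-cli.
nvme gen-dhchap-key --key-length=64 --hmac=3 --nqn=nqn.2024-02.io.spdk:host0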
00:51:31.514 nvme0n1 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:31.514 10:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:31.773 10:03:39 
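[annotation] At this point the outer loop at host/auth.sh@101-104 has advanced from ffdhe2048 to ffdhe3072 and restarts the keyid sweep from 0. Stripped of the xtrace noise, the driver visible in this excerpt reduces to roughly the nest below; the digests, dhgroups, keys and ckeys arrays are populated earlier in auth.sh (outside this excerpt), and the outer digest loop is assumed since only sha256 appears here.

# Hedged condensation of the test driver traced at host/auth.sh@101-104.
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target: configfs side
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host: rpc_cmd side
        done
    done
done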
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:31.773 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.032 nvme0n1 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:32.032 10:03:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:32.032 10:03:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.032 nvme0n1 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.032 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.292 nvme0n1 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:32.292 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.293 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.552 nvme0n1 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.552 10:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.811 nvme0n1 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:32.811 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.378 10:03:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.378 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:33.637 nvme0n1 00:51:33.637 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.637 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:33.637 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.637 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:33.637 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:33.637 10:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:33.637 10:03:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.637 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:33.896 nvme0n1 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:33.896 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.156 nvme0n1 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:34.156 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.157 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.416 nvme0n1 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:34.416 10:03:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.416 10:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.674 nvme0n1 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
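
That is the last ffdhe4096 cycle (key slot 4, which has no controller key); from here the trace repeats the same sequence for ffdhe6144 and then ffdhe8192. As a rough sketch of the loop these repeated traces come from, using the loop variables visible in the xtrace (only sha256 appears in this part of the log; other digests are presumably covered elsewhere in the run):

  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do                        # key slots 0..4 in this run
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # install the key on the target side
          connect_authenticate sha256 "$dhgroup" "$keyid"   # host attaches with the matching key
      done
  done
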
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:34.674 10:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:36.596 10:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:36.881 nvme0n1 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:36.881 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:37.449 nvme0n1 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:37.449 10:03:44 
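
The bdev_nvme_get_controllers + jq pair that keeps recurring here is the per-cycle assertion: the controller is only present if the attach, and therefore the DH-HMAC-CHAP handshake, succeeded, and its reported name is compared against the expected nvme0 (xtrace prints that comparison with each character backslash-escaped, hence the \n\v\m\e\0 pattern). A minimal sketch of the check, inferred from the trace rather than quoted from auth.sh:

  # an authentication failure would have made the attach RPC fail and left no controller,
  # so a matching name here is the pass condition for this digest/dhgroup/key combination
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
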
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:37.449 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.450 10:03:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.450 10:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:37.708 nvme0n1 00:51:37.708 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.708 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:37.708 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.708 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:37.708 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:37.708 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:37.966 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:37.967 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:37.967 
10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:38.225 nvme0n1 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:38.225 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:38.226 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:38.226 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:38.226 10:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:38.793 nvme0n1 00:51:38.793 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:38.793 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:38.793 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:38.793 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:38.793 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:38.793 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:38.793 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:38.793 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:38.793 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:38.793 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:38.794 10:03:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:38.794 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:39.361 nvme0n1 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:39.361 10:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:40.296 nvme0n1 00:51:40.296 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:40.296 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:40.296 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:40.296 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:40.296 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:40.296 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:40.297 
10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:40.297 10:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:40.864 nvme0n1 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:40.864 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:41.432 nvme0n1 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:41.432 10:03:48 
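Immediately after the target is re-keyed, connect_authenticate exercises the host side through the suite's RPC wrapper (rpc_cmd forwards its arguments to SPDK's scripts/rpc.py): it restricts the digests and DH groups the host may negotiate, attaches a controller with the matching key pair, and later checks that the controller actually appeared. Condensed for the key index 3 pass shown above:

    # Host side of one pass (sha256 / ffdhe8192 / key index 3).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3   # key3/ckey3 name keys loaded earlier in the test
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 once authentication succeeds
    rpc_cmd bdev_nvme_detach_controller nvme0              # clean up before the next key index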
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:41.432 10:03:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:41.432 10:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.367 nvme0n1 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:51:42.367 nvme0n1 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:42.367 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.368 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.626 nvme0n1 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:51:42.626 
10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:42.626 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.627 10:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.627 nvme0n1 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:42.627 
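get_main_ns_ip, which runs before every attach, only maps the transport in use to the right environment variable and prints its value; for TCP that is NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this run. A compact restatement of the nvmf/common.sh logic above (the transport variable name is not visible in these lines, so TEST_TRANSPORT below is a stand-in):

    # Pick the connect address for the current transport (condensed from nvmf/common.sh).
    declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
    var=${ip_candidates[$TEST_TRANSPORT]}    # NVMF_INITIATOR_IP when the transport is tcp
    ip=${!var}                               # indirect expansion -> 10.0.0.1 here
    [[ -n $ip ]] && echo "$ip"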
10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.627 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.886 nvme0n1 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:42.886 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.145 nvme0n1 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.145 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.146 nvme0n1 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.146 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.405 
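Key index 4 carries no controller key (the ckey='' and [[ -z '' ]] entries above), and connect_authenticate copes with that through the ${ckeys[keyid]:+...} expansion logged at host/auth.sh@58: when the controller key is empty, the ckey array expands to nothing and --dhchap-ctrlr-key is simply omitted from the attach, so that pass runs without bidirectional authentication. For example (the key arrays here are illustrative stand-ins for the ones defined earlier in auth.sh):

    keys=( k0 k1 k2 k3 k4 );  ckeys=( c0 c1 c2 c3 '' )          # index 4: no controller key
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty ckeys[4] -> empty array
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"                 # unidirectional auth when ckey is empty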
10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:43.405 10:03:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.405 nvme0n1 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:43.405 10:03:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.405 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:43.406 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:43.406 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:43.406 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:43.406 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:43.406 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:43.406 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:43.406 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:43.406 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:43.406 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:43.406 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:43.664 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:43.664 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.664 10:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.664 nvme0n1 00:51:43.664 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.664 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:43.664 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.664 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.664 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:43.664 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.664 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:43.664 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.665 10:03:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.665 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.924 nvme0n1 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:43.924 
10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:43.924 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
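The trace above repeats one pattern per digest/dhgroup/key combination: program the key (and, when present, the controller key) on the target side, restrict the host's DH-HMAC-CHAP digests and dhgroups, attach the controller over TCP with the matching host key, confirm the controller appears, then detach it. The lines below are a minimal illustrative sketch of that loop, reconstructed from the trace rather than taken from the actual host/auth.sh source, and they assume the SPDK test helpers (rpc_cmd, nvmet_auth_set_key) and the keys/ckeys arrays are already sourced and populated as in this test environment:

    # Sketch of the per-dhgroup/per-key authentication round seen in the trace.
    # Assumes SPDK test helpers and key arrays are sourced (as in host/auth.sh).
    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do
        # Program the target with the key under test for this digest/dhgroup.
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
        # Restrict the host to the digest/dhgroup combination being exercised.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # Connect with DH-HMAC-CHAP using the matching host-side key material.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # Authentication succeeded if the controller shows up; then tear it down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
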
00:51:44.183 nvme0n1 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:44.183 10:03:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.183 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.442 nvme0n1 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:44.442 10:03:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:44.442 10:03:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.442 10:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.718 nvme0n1 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:44.718 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.000 nvme0n1 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.000 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.259 nvme0n1 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.259 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.518 nvme0n1 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:45.518 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:45.519 10:03:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:45.519 10:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.086 nvme0n1 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:46.086 10:03:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.086 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.348 nvme0n1 00:51:46.348 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.348 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:46.348 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:46.349 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:46.350 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:46.350 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:46.350 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:46.350 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:46.350 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.350 10:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.921 nvme0n1 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:46.921 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:46.922 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:47.181 nvme0n1 00:51:47.181 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:47.181 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:47.181 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:47.181 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:47.181 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:47.181 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:47.181 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:47.181 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:47.181 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:47.181 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:47.440 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:47.441 10:03:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:47.441 10:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:47.700 nvme0n1 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:47.700 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:48.636 nvme0n1 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:48.636 10:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:49.204 nvme0n1 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:49.204 10:03:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:49.204 10:03:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:49.204 10:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:49.772 nvme0n1 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:49.772 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:50.032 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:50.032 
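Each connect_authenticate entry in this trace expands into the same handful of RPCs (the host/auth.sh@55-@61 lines). The sketch below reassembles them for readability; the RPC names, flags, and NQNs are copied from the trace, while the function wrapper and argument handling are assumptions about how the script is organized:

# Illustrative only: what one connect_authenticate step does, per the xtrace.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3 ckey
    # Pass --dhchap-ctrlr-key only when a controller key exists for this index.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
}
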
10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:50.601 nvme0n1 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:50.601 10:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:50.601 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.169 nvme0n1 00:51:51.169 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.169 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:51.169 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:51.169 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.169 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.169 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:51:51.430 10:03:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:51.430 10:03:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.430 nvme0n1 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:51.430 10:03:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.430 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.690 nvme0n1 00:51:51.690 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.690 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:51.690 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:51.690 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.690 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.690 10:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.690 nvme0n1 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.690 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.950 nvme0n1 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
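After every successful attach the trace prints a bare nvme0n1 line, which is most likely the bdev name returned by the attach RPC; it then checks the controller and tears it down (host/auth.sh@64-@65) before the next key is tried. A condensed sketch of that verify-and-detach step, using only commands visible in the trace (any waiting or retry logic around it is an assumption):

# Illustrative only: confirm the controller came up authenticated, then detach.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
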
common/autotest_common.sh@10 -- # set +x 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:51.950 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:51.951 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:51.951 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.951 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.210 nvme0n1 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.210 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:51:52.471 nvme0n1 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.471 nvme0n1 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.471 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.755 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:52.755 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:52.755 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.755 10:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:51:52.755 
10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.755 nvme0n1 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:52.755 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:52.756 
10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.756 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.026 nvme0n1 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.026 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.286 nvme0n1 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.286 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.545 nvme0n1 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.545 
10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:53.545 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:53.546 10:04:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.546 10:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.804 nvme0n1 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:53.804 10:04:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.804 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.063 nvme0n1 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.063 10:04:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.063 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.321 nvme0n1 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:51:54.321 
10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.321 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
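For the reader following the trace: the entries above finish the ffdhe4096 pass, and the log below repeats the same cycle for ffdhe6144. Each iteration programs a DH-CHAP secret into the kernel nvmet target, restricts the SPDK host to the matching digest and DH group, attaches a controller, checks that it came up as nvme0, and detaches it again. The following is a minimal bash sketch of one such iteration, reconstructed only from the rpc_cmd calls and echoed values visible in this trace; the configfs paths, the rpc.py location, and the earlier keyring registration of the keyN/ckeyN names are assumptions, not something shown in this excerpt.

#!/usr/bin/env bash
# Hedged sketch of one auth iteration (not the literal host/auth.sh). Run as root;
# assumes the nvmet subsystem/port for nqn.2024-02.io.spdk:cnode0 already exists.
set -euo pipefail

rpc="scripts/rpc.py"                      # assumed path; the trace goes through the rpc_cmd helper
hostnqn="nqn.2024-02.io.spdk:host0"
subnqn="nqn.2024-02.io.spdk:cnode0"
traddr="10.0.0.1"; trsvcid="4420"

digest="sha512"
dhgroup="ffdhe3072"
keyid=0
key="DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su:"                                              # host secret from the trace
ckey="DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=:"  # controller (bidirectional) secret from the trace

# Target side, mirroring nvmet_auth_set_key: assumed configfs layout -- the trace
# shows only the values being echoed ('hmac(sha512)', the dhgroup, the secrets).
hostdir="/sys/kernel/config/nvmet/hosts/${hostnqn}"
echo "hmac(${digest})" > "${hostdir}/dhchap_hash"
echo "${dhgroup}"      > "${hostdir}/dhchap_dhgroup"
echo "${key}"          > "${hostdir}/dhchap_key"
if [[ -n "${ckey}" ]]; then
    echo "${ckey}" > "${hostdir}/dhchap_ctrl_key"
fi

# Host side, mirroring connect_authenticate: restrict SPDK to this digest/dhgroup,
# then attach using the key names registered earlier in the test (key0/ckey0 here).
"${rpc}" bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
"${rpc}" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "${traddr}" -s "${trsvcid}" \
    -q "${hostnqn}" -n "${subnqn}" --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the authenticated controller exists, then tear it down for the next combination.
name=$("${rpc}" bdev_nvme_get_controllers | jq -r '.[].name')
[[ "${name}" == "nvme0" ]]
"${rpc}" bdev_nvme_detach_controller nvme0

The same attach/verify/detach pattern repeats in the log for every digest, DH group, and key index, so a failed DH-CHAP negotiation for any single combination surfaces as a failed attach at that point in the trace.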
00:51:54.580 nvme0n1 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:54.580 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:54.580 10:04:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.581 10:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.147 nvme0n1 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:55.147 10:04:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:55.147 10:04:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.147 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.405 nvme0n1 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.405 10:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.973 nvme0n1 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:55.973 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.974 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:56.232 nvme0n1 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:56.232 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:56.233 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:56.491 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:56.749 nvme0n1 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVmNzg5OTMzYjNkNzUzYjNhZTczMDFhMTgzN2IwZmMQp1Su: 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: ]] 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGQ3MjQ3MGZjODJlOGZlZGY4MmM5MWM3YmJkYzU4YzAyOGIzMjhhMDNhYmZiOWIwY2M3MWZkYmI0NGY0NzNiM8jBcBA=: 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:56.750 10:04:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:56.750 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:57.341 nvme0n1 00:51:57.341 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:57.341 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:57.341 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:57.341 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:57.341 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:57.341 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:51:57.600 10:04:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:57.600 10:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:58.166 nvme0n1 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:58.166 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:58.167 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:58.167 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:58.167 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:58.167 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:58.167 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:51:58.167 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:58.167 10:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:58.733 nvme0n1 00:51:58.733 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:58.733 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:58.733 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:58.733 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:58.733 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:58.733 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTJmZmY4NjRhZTU2NDNlYjY5ZjBhOGViOTkyYzIyODhjMjRiYzZiY2EzOTYxYmJmW7iXKg==: 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: ]] 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDZhODhlMjU2ZmRjNDA1YjI1Njg0NDEzZGZmM2I1MzVoutwF: 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:58.991 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:59.559 nvme0n1 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM2MzExNGQ0ZDk0NGMyMTQxYjY3NzY2YjQwOTE1MDJhZDg2NzQ1NWU5MjQ3OTI0OGZjYjQ1M2VjN2Y4N2E5NOCN6dY=: 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:51:59.559 10:04:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:59.559 10:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.127 nvme0n1 00:52:00.127 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.127 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:00.127 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.127 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.127 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:00.127 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.458 request: 00:52:00.458 { 00:52:00.458 "name": "nvme0", 00:52:00.458 "trtype": "tcp", 00:52:00.458 "traddr": "10.0.0.1", 00:52:00.458 "adrfam": "ipv4", 00:52:00.458 "trsvcid": "4420", 00:52:00.458 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:52:00.458 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:52:00.458 "prchk_reftag": false, 00:52:00.458 "prchk_guard": false, 00:52:00.458 "hdgst": false, 00:52:00.458 "ddgst": false, 00:52:00.458 "allow_unrecognized_csi": false, 00:52:00.458 "method": "bdev_nvme_attach_controller", 00:52:00.458 "req_id": 1 00:52:00.458 } 00:52:00.458 Got JSON-RPC error response 00:52:00.458 response: 00:52:00.458 { 00:52:00.458 "code": -5, 00:52:00.458 "message": "Input/output error" 00:52:00.458 } 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.458 request: 00:52:00.458 { 00:52:00.458 "name": "nvme0", 00:52:00.458 "trtype": "tcp", 00:52:00.458 "traddr": "10.0.0.1", 00:52:00.458 "adrfam": "ipv4", 00:52:00.458 "trsvcid": "4420", 00:52:00.458 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:52:00.458 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:52:00.458 "prchk_reftag": false, 00:52:00.458 "prchk_guard": false, 00:52:00.458 "hdgst": false, 00:52:00.458 "ddgst": false, 00:52:00.458 "dhchap_key": "key2", 00:52:00.458 "allow_unrecognized_csi": false, 00:52:00.458 "method": "bdev_nvme_attach_controller", 00:52:00.458 "req_id": 1 00:52:00.458 } 00:52:00.458 Got JSON-RPC error response 00:52:00.458 response: 00:52:00.458 { 00:52:00.458 "code": -5, 00:52:00.458 "message": "Input/output error" 00:52:00.458 } 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:00.458 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:00.459 10:04:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.459 request: 00:52:00.459 { 00:52:00.459 "name": "nvme0", 00:52:00.459 "trtype": "tcp", 00:52:00.459 "traddr": "10.0.0.1", 00:52:00.459 "adrfam": "ipv4", 00:52:00.459 "trsvcid": "4420", 
00:52:00.459 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:52:00.459 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:52:00.459 "prchk_reftag": false, 00:52:00.459 "prchk_guard": false, 00:52:00.459 "hdgst": false, 00:52:00.459 "ddgst": false, 00:52:00.459 "dhchap_key": "key1", 00:52:00.459 "dhchap_ctrlr_key": "ckey2", 00:52:00.459 "allow_unrecognized_csi": false, 00:52:00.459 "method": "bdev_nvme_attach_controller", 00:52:00.459 "req_id": 1 00:52:00.459 } 00:52:00.459 Got JSON-RPC error response 00:52:00.459 response: 00:52:00.459 { 00:52:00.459 "code": -5, 00:52:00.459 "message": "Input/output error" 00:52:00.459 } 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.459 10:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.718 nvme0n1 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.718 request: 00:52:00.718 { 00:52:00.718 "name": "nvme0", 00:52:00.718 "dhchap_key": "key1", 00:52:00.718 "dhchap_ctrlr_key": "ckey2", 00:52:00.718 "method": "bdev_nvme_set_keys", 00:52:00.718 "req_id": 1 00:52:00.718 } 00:52:00.718 Got JSON-RPC error response 00:52:00.718 response: 00:52:00.718 
{ 00:52:00.718 "code": -13, 00:52:00.718 "message": "Permission denied" 00:52:00.718 } 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:52:00.718 10:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThhYTFiYmE4YTQ1ZjE2MmJmOTNlYzg0MTk4NTVlMjBkZmRmNDlhNWRhMmJhNmM4nliuUg==: 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: ]] 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyNzkxZjAwODhiZmVhNWJlNGIyZGY0N2VhOTU3NjM0NDlkOGM3ZjE4YWVlZGIwiL3h7Q==: 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:52:02.092 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:02.093 nvme0n1 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU0Y2Y2OTAxZjIyZjQ0NDVkYTk5N2YyYjZkODM5YjCOBpbM: 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: ]] 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Zjg4NzNiOGIwNjFhNTM4NWY4YzYzNDAxNzgxN2E1MTABYxHb: 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:02.093 request: 00:52:02.093 { 00:52:02.093 "name": "nvme0", 00:52:02.093 "dhchap_key": "key2", 00:52:02.093 "dhchap_ctrlr_key": "ckey1", 00:52:02.093 "method": "bdev_nvme_set_keys", 00:52:02.093 "req_id": 1 00:52:02.093 } 00:52:02.093 Got JSON-RPC error response 00:52:02.093 response: 00:52:02.093 { 00:52:02.093 "code": -13, 00:52:02.093 "message": "Permission denied" 00:52:02.093 } 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:52:02.093 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:52:03.028 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:52:03.028 rmmod nvme_tcp 00:52:03.287 rmmod nvme_fabrics 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78357 ']' 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78357 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78357 ']' 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78357 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78357 00:52:03.287 killing process with pid 78357 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78357' 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78357 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78357 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:52:03.287 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:52:03.546 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:52:03.546 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:52:03.546 10:04:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:52:03.547 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:52:03.547 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:52:03.806 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:52:04.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:52:04.373 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
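
With the auth suite finished, cleanup runs above: nvme-tcp and nvme-fabrics are unloaded on the host side, the nvmf target process (pid 78357) is killed, the SPDK-tagged iptables rules are stripped, the veth/bridge topology is deleted, and the kernel nvmet subsystem that acted as the authenticating target is removed through configfs. A condensed sketch of that configfs teardown, assuming the subsystem, port and host names used throughout this log (the bare "echo 0" in the trace is assumed to disable the namespace):

    #!/usr/bin/env bash
    # Sketch only; mirrors the rm/rmdir sequence in the log above.
    set -u
    subnqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    cfg=/sys/kernel/config/nvmet

    # Unlink the allowed host from the subsystem, then drop the host entry.
    rm    "$cfg/subsystems/$subnqn/allowed_hosts/$hostnqn"
    rmdir "$cfg/hosts/$hostnqn"

    if [[ -e $cfg/subsystems/$subnqn ]]; then
        echo 0 > "$cfg/subsystems/$subnqn/namespaces/1/enable"   # assumed target of "echo 0"
        rm -f   "$cfg/ports/1/subsystems/$subnqn"                # detach subsystem from port 1
        rmdir   "$cfg/subsystems/$subnqn/namespaces/1"
        rmdir   "$cfg/ports/1"
        rmdir   "$cfg/subsystems/$subnqn"
    fi

    modprobe -r nvmet_tcp nvmet   # unload the kernel target modules last
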
00:52:04.373 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:52:04.632 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.I3L /tmp/spdk.key-null.DRB /tmp/spdk.key-sha256.urp /tmp/spdk.key-sha384.SuI /tmp/spdk.key-sha512.9GE /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:52:04.632 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:52:04.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:52:04.890 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:52:04.890 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:52:04.890 00:52:04.890 real 0m38.087s 00:52:04.890 user 0m34.367s 00:52:04.890 sys 0m3.700s 00:52:04.890 10:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:04.890 10:04:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:04.890 ************************************ 00:52:04.890 END TEST nvmf_auth_host 00:52:04.890 ************************************ 00:52:04.890 10:04:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:52:04.890 10:04:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:52:04.890 10:04:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:52:04.890 10:04:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:04.890 10:04:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:52:04.890 ************************************ 00:52:04.890 START TEST nvmf_digest 00:52:04.890 ************************************ 00:52:04.890 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:52:05.150 * Looking for test storage... 
00:52:05.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:05.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:05.150 --rc genhtml_branch_coverage=1 00:52:05.150 --rc genhtml_function_coverage=1 00:52:05.150 --rc genhtml_legend=1 00:52:05.150 --rc geninfo_all_blocks=1 00:52:05.150 --rc geninfo_unexecuted_blocks=1 00:52:05.150 00:52:05.150 ' 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:05.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:05.150 --rc genhtml_branch_coverage=1 00:52:05.150 --rc genhtml_function_coverage=1 00:52:05.150 --rc genhtml_legend=1 00:52:05.150 --rc geninfo_all_blocks=1 00:52:05.150 --rc geninfo_unexecuted_blocks=1 00:52:05.150 00:52:05.150 ' 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:05.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:05.150 --rc genhtml_branch_coverage=1 00:52:05.150 --rc genhtml_function_coverage=1 00:52:05.150 --rc genhtml_legend=1 00:52:05.150 --rc geninfo_all_blocks=1 00:52:05.150 --rc geninfo_unexecuted_blocks=1 00:52:05.150 00:52:05.150 ' 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:05.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:05.150 --rc genhtml_branch_coverage=1 00:52:05.150 --rc genhtml_function_coverage=1 00:52:05.150 --rc genhtml_legend=1 00:52:05.150 --rc geninfo_all_blocks=1 00:52:05.150 --rc geninfo_unexecuted_blocks=1 00:52:05.150 00:52:05.150 ' 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:05.150 10:04:12 
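
The digest suite begins with lcov feature detection: "lt 1.15 2" splits both version strings on ".", "-" and ":" and compares them component by component to pick the right coverage flags. A standalone approximation of that comparison (hedged; it is not a copy of scripts/common.sh):

    #!/usr/bin/env bash
    # Returns success when $1 is strictly older than $2, e.g. lt 1.15 2.
    lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov is older than 2, use the legacy --rc options"
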
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:05.150 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:52:05.151 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:52:05.151 Cannot find device "nvmf_init_br" 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:52:05.151 Cannot find device "nvmf_init_br2" 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:52:05.151 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:52:05.411 Cannot find device "nvmf_tgt_br" 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:52:05.411 Cannot find device "nvmf_tgt_br2" 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:52:05.411 Cannot find device "nvmf_init_br" 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:52:05.411 Cannot find device "nvmf_init_br2" 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:52:05.411 Cannot find device "nvmf_tgt_br" 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:52:05.411 Cannot find device "nvmf_tgt_br2" 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:52:05.411 Cannot find device "nvmf_br" 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:52:05.411 Cannot find device "nvmf_init_if" 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:52:05.411 Cannot find device "nvmf_init_if2" 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:52:05.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:52:05.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:52:05.411 10:04:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:52:05.411 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:52:05.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:52:05.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:52:05.671 00:52:05.671 --- 10.0.0.3 ping statistics --- 00:52:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:05.671 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:52:05.671 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:52:05.671 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:52:05.671 00:52:05.671 --- 10.0.0.4 ping statistics --- 00:52:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:05.671 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:52:05.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:52:05.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:52:05.671 00:52:05.671 --- 10.0.0.1 ping statistics --- 00:52:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:05.671 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:52:05.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:52:05.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:52:05.671 00:52:05.671 --- 10.0.0.2 ping statistics --- 00:52:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:05.671 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:52:05.671 10:04:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:52:05.671 ************************************ 00:52:05.671 START TEST nvmf_digest_clean 00:52:05.671 ************************************ 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
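
nvmf_veth_init has now rebuilt the test network: an nvmf_tgt_ns_spdk namespace, two veth pairs on the initiator side and two on the target side, all joined by the nvmf_br bridge, ACCEPT rules for TCP port 4420, and one ping per address to prove 10.0.0.1 through 10.0.0.4 are reachable (all answered above). A trimmed sketch of that layout with the same interface names and addresses; only one pair per side is shown, whereas the log builds two:

    #!/usr/bin/env bash
    set -eu
    ns=nvmf_tgt_ns_spdk

    ip netns add "$ns"
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns "$ns"

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip netns exec "$ns" ip link set nvmf_tgt_if up
    ip netns exec "$ns" ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Allow NVMe/TCP traffic; the comment tag lets cleanup strip these rules later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # Sanity checks in both directions.
    ping -c 1 10.0.0.3
    ip netns exec "$ns" ping -c 1 10.0.0.1
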
00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80015 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80015 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80015 ']' 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:05.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:05.671 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:52:05.671 [2024-12-09 10:04:13.087665] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:52:05.671 [2024-12-09 10:04:13.087754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:05.930 [2024-12-09 10:04:13.247736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:05.930 [2024-12-09 10:04:13.285459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:05.930 [2024-12-09 10:04:13.285531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:05.930 [2024-12-09 10:04:13.285546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:05.930 [2024-12-09 10:04:13.285556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:05.930 [2024-12-09 10:04:13.285566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
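
nvmf_digest_clean then starts its own target: nvmf_tgt is launched inside the target namespace with --wait-for-rpc, the harness waits for /var/tmp/spdk.sock, and common_target_config provisions the null0 bdev and the TCP listener on 10.0.0.3:4420 reported just below. The exact RPCs issued by common_target_config are not visible in this log, so the configuration half of the sketch below is an assumed approximation built from standard SPDK RPCs and the names that do appear (null0, nqn.2016-06.io.spdk:cnode1, SPDKISFASTANDAWESOME):

    #!/usr/bin/env bash
    set -eu
    spdk=/home/vagrant/spdk_repo/spdk
    rpc="$spdk/scripts/rpc.py"

    ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Wait for the app to listen on its default RPC socket (/var/tmp/spdk.sock).
    until "$rpc" -t 1 rpc_get_methods &> /dev/null; do sleep 0.5; done

    "$rpc" framework_start_init
    "$rpc" nvmf_create_transport -t tcp
    "$rpc" bdev_null_create null0 100 4096
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
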
00:52:05.930 [2024-12-09 10:04:13.285928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:05.930 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:52:05.930 [2024-12-09 10:04:13.420865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:06.189 null0 00:52:06.189 [2024-12-09 10:04:13.460150] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:06.189 [2024-12-09 10:04:13.484339] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80034 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80034 /var/tmp/bperf.sock 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80034 ']' 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:06.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:06.189 10:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:52:06.189 [2024-12-09 10:04:13.541390] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:52:06.189 [2024-12-09 10:04:13.541486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80034 ] 00:52:06.447 [2024-12-09 10:04:13.703047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:06.447 [2024-12-09 10:04:13.741905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:07.383 10:04:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:07.383 10:04:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:52:07.383 10:04:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:52:07.384 10:04:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:52:07.384 10:04:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:52:07.642 [2024-12-09 10:04:14.915381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:07.642 10:04:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:07.642 10:04:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:07.900 nvme0n1 00:52:07.900 10:04:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:52:07.900 10:04:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:52:08.158 Running I/O for 2 seconds... 
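For orientation, the host-side steps traced above for this first clean-digest run (randread, 4 KiB blocks, queue depth 128) reduce to a short RPC sequence. The sketch below only restates the commands visible in the xtrace; the backgrounding and the wait for /var/tmp/bperf.sock are paraphrased from the surrounding digest.sh helpers rather than copied verbatim:

    # launch bdevperf paused (-z --wait-for-rpc) with the workload parameters under test
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # once /var/tmp/bperf.sock is listening, finish init and attach the TCP target with data digest enabled
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # drive the 2-second workload against the resulting nvme0n1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests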
00:52:10.029 14224.00 IOPS, 55.56 MiB/s [2024-12-09T10:04:17.531Z] 14224.00 IOPS, 55.56 MiB/s 00:52:10.030 Latency(us) 00:52:10.030 [2024-12-09T10:04:17.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:10.030 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:52:10.030 nvme0n1 : 2.01 14211.91 55.52 0.00 0.00 8999.75 8162.21 18707.55 00:52:10.030 [2024-12-09T10:04:17.532Z] =================================================================================================================== 00:52:10.030 [2024-12-09T10:04:17.532Z] Total : 14211.91 55.52 0.00 0.00 8999.75 8162.21 18707.55 00:52:10.030 { 00:52:10.030 "results": [ 00:52:10.030 { 00:52:10.030 "job": "nvme0n1", 00:52:10.030 "core_mask": "0x2", 00:52:10.030 "workload": "randread", 00:52:10.030 "status": "finished", 00:52:10.030 "queue_depth": 128, 00:52:10.030 "io_size": 4096, 00:52:10.030 "runtime": 2.010708, 00:52:10.030 "iops": 14211.909436874972, 00:52:10.030 "mibps": 55.51527123779286, 00:52:10.030 "io_failed": 0, 00:52:10.030 "io_timeout": 0, 00:52:10.030 "avg_latency_us": 8999.753883742238, 00:52:10.030 "min_latency_us": 8162.210909090909, 00:52:10.030 "max_latency_us": 18707.54909090909 00:52:10.030 } 00:52:10.030 ], 00:52:10.030 "core_count": 1 00:52:10.030 } 00:52:10.030 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:52:10.030 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:52:10.030 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:52:10.030 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:52:10.030 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:52:10.030 | select(.opcode=="crc32c") 00:52:10.030 | "\(.module_name) \(.executed)"' 00:52:10.288 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:52:10.288 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:52:10.288 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:52:10.288 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:52:10.288 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80034 00:52:10.288 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80034 ']' 00:52:10.288 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80034 00:52:10.288 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:52:10.288 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:10.288 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80034 00:52:10.547 killing process with pid 80034 00:52:10.547 Received shutdown signal, test time was about 2.000000 seconds 00:52:10.547 00:52:10.547 Latency(us) 00:52:10.547 [2024-12-09T10:04:18.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:52:10.547 [2024-12-09T10:04:18.049Z] =================================================================================================================== 00:52:10.547 [2024-12-09T10:04:18.049Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80034' 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80034 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80034 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80094 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80094 /var/tmp/bperf.sock 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80094 ']' 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:52:10.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:10.547 10:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:52:10.807 I/O size of 131072 is greater than zero copy threshold (65536). 00:52:10.807 Zero copy mechanism will not be used. 00:52:10.807 [2024-12-09 10:04:18.049041] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:52:10.807 [2024-12-09 10:04:18.049137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80094 ] 00:52:10.807 [2024-12-09 10:04:18.202454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:10.807 [2024-12-09 10:04:18.234739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:10.807 10:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:10.807 10:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:52:10.807 10:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:52:10.807 10:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:52:10.807 10:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:52:11.373 [2024-12-09 10:04:18.627582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:11.374 10:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:11.374 10:04:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:11.632 nvme0n1 00:52:11.632 10:04:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:52:11.632 10:04:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:52:11.891 I/O size of 131072 is greater than zero copy threshold (65536). 00:52:11.891 Zero copy mechanism will not be used. 00:52:11.891 Running I/O for 2 seconds... 
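As with the first run, the workload below ends with bdevperf dumping a JSON result object (results[].iops, .mibps and the latency fields). If that blob were saved to a file, the headline numbers could be pulled out with a small jq filter; the file name here is hypothetical and the filter is only an illustration, not something the harness runs:

    # extract per-job throughput and average latency from a saved bdevperf result blob (hypothetical bperf_results.json)
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' bperf_results.json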
00:52:13.762 7168.00 IOPS, 896.00 MiB/s [2024-12-09T10:04:21.264Z] 7208.00 IOPS, 901.00 MiB/s 00:52:13.762 Latency(us) 00:52:13.762 [2024-12-09T10:04:21.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:13.762 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:52:13.762 nvme0n1 : 2.00 7206.03 900.75 0.00 0.00 2216.68 2055.45 9711.24 00:52:13.762 [2024-12-09T10:04:21.264Z] =================================================================================================================== 00:52:13.762 [2024-12-09T10:04:21.264Z] Total : 7206.03 900.75 0.00 0.00 2216.68 2055.45 9711.24 00:52:13.762 { 00:52:13.762 "results": [ 00:52:13.762 { 00:52:13.762 "job": "nvme0n1", 00:52:13.762 "core_mask": "0x2", 00:52:13.762 "workload": "randread", 00:52:13.762 "status": "finished", 00:52:13.762 "queue_depth": 16, 00:52:13.762 "io_size": 131072, 00:52:13.762 "runtime": 2.002767, 00:52:13.762 "iops": 7206.03045686293, 00:52:13.762 "mibps": 900.7538071078662, 00:52:13.762 "io_failed": 0, 00:52:13.762 "io_timeout": 0, 00:52:13.763 "avg_latency_us": 2216.6761862527715, 00:52:13.763 "min_latency_us": 2055.447272727273, 00:52:13.763 "max_latency_us": 9711.243636363637 00:52:13.763 } 00:52:13.763 ], 00:52:13.763 "core_count": 1 00:52:13.763 } 00:52:13.763 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:52:13.763 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:52:13.763 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:52:13.763 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:52:13.763 | select(.opcode=="crc32c") 00:52:13.763 | "\(.module_name) \(.executed)"' 00:52:13.763 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:52:14.021 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:52:14.021 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:52:14.021 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:52:14.021 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:52:14.021 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80094 00:52:14.021 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80094 ']' 00:52:14.021 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80094 00:52:14.021 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:52:14.021 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:14.021 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80094 00:52:14.280 killing process with pid 80094 00:52:14.280 Received shutdown signal, test time was about 2.000000 seconds 00:52:14.280 00:52:14.280 Latency(us) 00:52:14.280 [2024-12-09T10:04:21.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:52:14.280 [2024-12-09T10:04:21.782Z] =================================================================================================================== 00:52:14.280 [2024-12-09T10:04:21.782Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80094' 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80094 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80094 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80147 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80147 /var/tmp/bperf.sock 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80147 ']' 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:52:14.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:14.280 10:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:52:14.280 [2024-12-09 10:04:21.773732] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:52:14.280 [2024-12-09 10:04:21.773828] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80147 ] 00:52:14.539 [2024-12-09 10:04:21.920723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:14.539 [2024-12-09 10:04:21.953434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:14.539 10:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:14.539 10:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:52:14.539 10:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:52:14.539 10:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:52:14.539 10:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:52:15.106 [2024-12-09 10:04:22.308669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:15.106 10:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:15.106 10:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:15.365 nvme0n1 00:52:15.365 10:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:52:15.365 10:04:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:52:15.365 Running I/O for 2 seconds... 
00:52:17.676 15495.00 IOPS, 60.53 MiB/s [2024-12-09T10:04:25.178Z] 15494.50 IOPS, 60.53 MiB/s 00:52:17.676 Latency(us) 00:52:17.676 [2024-12-09T10:04:25.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:17.676 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:52:17.676 nvme0n1 : 2.00 15523.71 60.64 0.00 0.00 8236.14 5749.29 16443.58 00:52:17.676 [2024-12-09T10:04:25.178Z] =================================================================================================================== 00:52:17.676 [2024-12-09T10:04:25.178Z] Total : 15523.71 60.64 0.00 0.00 8236.14 5749.29 16443.58 00:52:17.676 { 00:52:17.676 "results": [ 00:52:17.676 { 00:52:17.676 "job": "nvme0n1", 00:52:17.676 "core_mask": "0x2", 00:52:17.676 "workload": "randwrite", 00:52:17.676 "status": "finished", 00:52:17.676 "queue_depth": 128, 00:52:17.676 "io_size": 4096, 00:52:17.676 "runtime": 2.004482, 00:52:17.676 "iops": 15523.711362835886, 00:52:17.676 "mibps": 60.63949751107768, 00:52:17.676 "io_failed": 0, 00:52:17.676 "io_timeout": 0, 00:52:17.676 "avg_latency_us": 8236.144662227896, 00:52:17.676 "min_latency_us": 5749.294545454545, 00:52:17.676 "max_latency_us": 16443.578181818182 00:52:17.676 } 00:52:17.676 ], 00:52:17.676 "core_count": 1 00:52:17.676 } 00:52:17.676 10:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:52:17.676 10:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:52:17.676 10:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:52:17.676 10:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:52:17.676 | select(.opcode=="crc32c") 00:52:17.676 | "\(.module_name) \(.executed)"' 00:52:17.676 10:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:52:17.676 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:52:17.676 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:52:17.676 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80147 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80147 ']' 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80147 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80147 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
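The post-run digest accounting seen above (get_accel_stats followed by the software/executed check) is the same for every iteration. Spelled out with the exact RPC call and jq filter from the trace, and the comparison paraphrased into a plain pipeline, it looks roughly like this:

    # ask the bdevperf app which accel module computed crc32c and how many times it ran
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | while read -r acc_module acc_executed; do
          # with no DSA in play the digests must have been produced in software, and at least once
          [[ $acc_module == software ]] && (( acc_executed > 0 )) && echo "crc32c: $acc_module ($acc_executed ops)"
        done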
00:52:17.677 killing process with pid 80147 00:52:17.677 Received shutdown signal, test time was about 2.000000 seconds 00:52:17.677 00:52:17.677 Latency(us) 00:52:17.677 [2024-12-09T10:04:25.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:17.677 [2024-12-09T10:04:25.179Z] =================================================================================================================== 00:52:17.677 [2024-12-09T10:04:25.179Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80147' 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80147 00:52:17.677 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80147 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80201 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80201 /var/tmp/bperf.sock 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80201 ']' 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:52:17.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:17.935 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:52:17.935 [2024-12-09 10:04:25.341001] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:52:17.935 [2024-12-09 10:04:25.341308] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80201 ] 00:52:17.935 I/O size of 131072 is greater than zero copy threshold (65536). 00:52:17.935 Zero copy mechanism will not be used. [2024-12-09 10:04:25.487196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:18.194 [2024-12-09 10:04:25.520652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:18.194 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:18.194 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:52:18.194 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:52:18.194 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:52:18.194 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:52:18.457 [2024-12-09 10:04:25.882896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:18.457 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:18.457 10:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:19.028 nvme0n1 00:52:19.028 10:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:52:19.028 10:04:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:52:19.028 I/O size of 131072 is greater than zero copy threshold (65536). 00:52:19.028 Zero copy mechanism will not be used. 00:52:19.028 Running I/O for 2 seconds... 

00:52:21.339 6094.00 IOPS, 761.75 MiB/s [2024-12-09T10:04:28.841Z] 6022.50 IOPS, 752.81 MiB/s 00:52:21.339 Latency(us) 00:52:21.339 [2024-12-09T10:04:28.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:21.339 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:52:21.339 nvme0n1 : 2.00 6019.65 752.46 0.00 0.00 2652.05 2144.81 8281.37 00:52:21.339 [2024-12-09T10:04:28.841Z] =================================================================================================================== 00:52:21.339 [2024-12-09T10:04:28.841Z] Total : 6019.65 752.46 0.00 0.00 2652.05 2144.81 8281.37 00:52:21.339 { 00:52:21.339 "results": [ 00:52:21.339 { 00:52:21.339 "job": "nvme0n1", 00:52:21.339 "core_mask": "0x2", 00:52:21.339 "workload": "randwrite", 00:52:21.339 "status": "finished", 00:52:21.339 "queue_depth": 16, 00:52:21.339 "io_size": 131072, 00:52:21.339 "runtime": 2.004102, 00:52:21.339 "iops": 6019.653690281233, 00:52:21.339 "mibps": 752.4567112851541, 00:52:21.339 "io_failed": 0, 00:52:21.339 "io_timeout": 0, 00:52:21.339 "avg_latency_us": 2652.0482662165423, 00:52:21.339 "min_latency_us": 2144.8145454545456, 00:52:21.339 "max_latency_us": 8281.367272727273 00:52:21.339 } 00:52:21.339 ], 00:52:21.339 "core_count": 1 00:52:21.339 } 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:52:21.339 | select(.opcode=="crc32c") 00:52:21.339 | "\(.module_name) \(.executed)"' 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80201 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80201 ']' 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80201 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80201 00:52:21.339 killing process with pid 80201 00:52:21.339 Received shutdown signal, test time was about 2.000000 seconds 00:52:21.339 00:52:21.339 Latency(us) 00:52:21.339 [2024-12-09T10:04:28.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:52:21.339 [2024-12-09T10:04:28.841Z] =================================================================================================================== 00:52:21.339 [2024-12-09T10:04:28.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80201' 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80201 00:52:21.339 10:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80201 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80015 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80015 ']' 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80015 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80015 00:52:21.598 killing process with pid 80015 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80015' 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80015 00:52:21.598 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80015 00:52:21.857 ************************************ 00:52:21.857 END TEST nvmf_digest_clean 00:52:21.857 ************************************ 00:52:21.857 00:52:21.857 real 0m16.200s 00:52:21.857 user 0m32.224s 00:52:21.857 sys 0m4.424s 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:52:21.857 ************************************ 00:52:21.857 START TEST nvmf_digest_error 00:52:21.857 ************************************ 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:52:21.857 10:04:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80276 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80276 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80276 ']' 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:21.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:21.857 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:21.857 [2024-12-09 10:04:29.335329] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:52:21.857 [2024-12-09 10:04:29.336114] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:22.115 [2024-12-09 10:04:29.487804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:22.115 [2024-12-09 10:04:29.518658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:22.115 [2024-12-09 10:04:29.518717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:22.115 [2024-12-09 10:04:29.518729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:22.115 [2024-12-09 10:04:29.518737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:22.115 [2024-12-09 10:04:29.518744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
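The nvmf_digest_error test that starts here reuses the same bperf plumbing; the only new wiring, recorded in the trace that follows, is around crc32c error injection. Using the same rpc_cmd helper (which lands on the nvmf target's RPC socket, as its accel_rpc notice shows) and bperf_rpc (/var/tmp/bperf.sock) helper from the xtrace, the setup amounts to roughly:

    # target side: route crc32c through the error-injecting accel module before framework init completes
    rpc_cmd accel_assign_opc -o crc32c -m error

    # initiator side: keep NVMe error statistics and never retry, so digest failures surface immediately
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # leave injection disabled while attaching the controller, then corrupt 256 crc32c operations for the run
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256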
00:52:22.115 [2024-12-09 10:04:29.519069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:22.115 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:22.115 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:52:22.115 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:22.115 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:22.115 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:22.115 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:22.115 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:52:22.115 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.115 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:22.115 [2024-12-09 10:04:29.615459] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:22.374 [2024-12-09 10:04:29.652367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:22.374 null0 00:52:22.374 [2024-12-09 10:04:29.687853] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:22.374 [2024-12-09 10:04:29.712018] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80301 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80301 /var/tmp/bperf.sock 00:52:22.374 10:04:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80301 ']' 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:52:22.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:22.374 10:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:22.374 [2024-12-09 10:04:29.764639] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:52:22.374 [2024-12-09 10:04:29.764726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80301 ] 00:52:22.636 [2024-12-09 10:04:29.919340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:22.636 [2024-12-09 10:04:29.957585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:22.636 [2024-12-09 10:04:29.990316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:22.636 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:22.636 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:52:22.636 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:52:22.636 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:52:22.894 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:52:22.894 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.894 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:22.894 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.894 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:22.894 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:23.459 nvme0n1 00:52:23.459 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:52:23.459 10:04:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.459 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:23.459 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.459 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:52:23.459 10:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:52:23.459 Running I/O for 2 seconds... 00:52:23.459 [2024-12-09 10:04:30.894601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.460 [2024-12-09 10:04:30.894827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.460 [2024-12-09 10:04:30.894995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.460 [2024-12-09 10:04:30.912943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.460 [2024-12-09 10:04:30.913170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.460 [2024-12-09 10:04:30.913192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.460 [2024-12-09 10:04:30.931040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.460 [2024-12-09 10:04:30.931081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.460 [2024-12-09 10:04:30.931112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.460 [2024-12-09 10:04:30.948761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.460 [2024-12-09 10:04:30.948942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.460 [2024-12-09 10:04:30.948981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.718 [2024-12-09 10:04:30.966996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.718 [2024-12-09 10:04:30.967215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.718 [2024-12-09 10:04:30.967433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.718 [2024-12-09 10:04:30.985216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.718 [2024-12-09 10:04:30.985423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9103 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.718 [2024-12-09 10:04:30.985612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.718 [2024-12-09 10:04:31.003300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.718 [2024-12-09 10:04:31.003507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.718 [2024-12-09 10:04:31.003637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.718 [2024-12-09 10:04:31.021313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.718 [2024-12-09 10:04:31.021496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.718 [2024-12-09 10:04:31.021623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.718 [2024-12-09 10:04:31.039392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.718 [2024-12-09 10:04:31.039577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.718 [2024-12-09 10:04:31.039773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.718 [2024-12-09 10:04:31.057564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.718 [2024-12-09 10:04:31.057760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.718 [2024-12-09 10:04:31.057944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.718 [2024-12-09 10:04:31.077398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.718 [2024-12-09 10:04:31.077589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.718 [2024-12-09 10:04:31.077781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.718 [2024-12-09 10:04:31.096284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.718 [2024-12-09 10:04:31.096447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.719 [2024-12-09 10:04:31.096467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.719 [2024-12-09 10:04:31.114575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.719 [2024-12-09 10:04:31.114617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:18691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.719 [2024-12-09 10:04:31.114649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.719 [2024-12-09 10:04:31.132161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.719 [2024-12-09 10:04:31.132201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.719 [2024-12-09 10:04:31.132216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.719 [2024-12-09 10:04:31.149634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.719 [2024-12-09 10:04:31.149689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.719 [2024-12-09 10:04:31.149720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.719 [2024-12-09 10:04:31.167136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.719 [2024-12-09 10:04:31.167174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.719 [2024-12-09 10:04:31.167188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.719 [2024-12-09 10:04:31.184884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.719 [2024-12-09 10:04:31.184925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.719 [2024-12-09 10:04:31.184957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.719 [2024-12-09 10:04:31.202585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.719 [2024-12-09 10:04:31.202762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.719 [2024-12-09 10:04:31.202782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.977 [2024-12-09 10:04:31.220452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.220493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.220509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.238216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.238254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.238285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.255849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.255888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.255919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.273742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.273901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.273919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.291765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.291805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.291836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.309571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.309611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.309626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.327353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.327394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.327409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.345124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.345165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.345180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.362835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 
[2024-12-09 10:04:31.362872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.362886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.380569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.380752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.380903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.398882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.399080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.399224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.417194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.417380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.417509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.435497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.435685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.435824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.453861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.454059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.454234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:23.978 [2024-12-09 10:04:31.472254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:23.978 [2024-12-09 10:04:31.472437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:23.978 [2024-12-09 10:04:31.472561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.237 [2024-12-09 10:04:31.490426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc81b50) 00:52:24.237 [2024-12-09 10:04:31.490467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.237 [2024-12-09 10:04:31.490483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.508331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.508370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.508384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.526236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.526292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.526306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.543925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.543974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.543990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.561614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.561652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.561665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.579297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.579336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.579350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.597161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.597200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.597214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.614805] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.614842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.614857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.632518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.632557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.632570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.650379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.650433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.650463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.668215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.668253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.668267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.685866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.685905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.685918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.703470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.703522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.703536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.238 [2024-12-09 10:04:31.721254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.238 [2024-12-09 10:04:31.721308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.238 [2024-12-09 10:04:31.721322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
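The repeated pairs above, a "data digest error" from nvme_tcp.c followed by a READ completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), are the expected behaviour for this digest_error case: the crc32c results used for data-digest handling are deliberately corrupted, so verification of the received data fails on the host side and each command is completed with a transient transport error instead of silently returning bad data. A rough sketch of the injection toggle, using the rpc.py path and flags that appear later in this trace (which application socket the test's rpc_cmd actually points at is an assumption here):

# Sketch only: corrupt crc32c results so data-digest verification fails, then
# switch the injection back off; '-i 32' is the operation count used later in this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock   # assumption: rpc_cmd in digest.sh may target a different socket
"$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
"$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable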
00:52:24.498 [2024-12-09 10:04:31.739075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.498 [2024-12-09 10:04:31.739128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.498 [2024-12-09 10:04:31.739142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.498 [2024-12-09 10:04:31.757004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.498 [2024-12-09 10:04:31.757056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.498 [2024-12-09 10:04:31.757070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.498 [2024-12-09 10:04:31.774856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.774894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.774908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.792531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.792569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.792582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.810275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.810315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.810329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.828171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.828210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.828224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.845941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.845987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.846002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.863718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.863757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.863770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 14042.00 IOPS, 54.85 MiB/s [2024-12-09T10:04:32.001Z] [2024-12-09 10:04:31.883219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.883273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.883287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.900956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.901015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.901029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.918668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.918721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.918734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.936433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.936470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.936483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.954157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.954195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.954209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.972840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.972887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 
10:04:31.972903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.499 [2024-12-09 10:04:31.990559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.499 [2024-12-09 10:04:31.990601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.499 [2024-12-09 10:04:31.990616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.758 [2024-12-09 10:04:32.008479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.758 [2024-12-09 10:04:32.008537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.758 [2024-12-09 10:04:32.008551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.758 [2024-12-09 10:04:32.033946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.758 [2024-12-09 10:04:32.033996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.758 [2024-12-09 10:04:32.034011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.051748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.051788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.051802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.069440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.069479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.069493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.087176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.087220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.087238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.105043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.105086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:373 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.105101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.122810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.122854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.122869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.140949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.141002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.141018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.159002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.159041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.159055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.176769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.176813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.176828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.194673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.194713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.194727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.212493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.212533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.212548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.230138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.230191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:104 nsid:1 lba:22952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.230205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:24.759 [2024-12-09 10:04:32.247744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:24.759 [2024-12-09 10:04:32.247800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:24.759 [2024-12-09 10:04:32.247814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.265695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.265734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.265748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.283437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.283491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.283505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.301181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.301220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.301234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.318873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.318927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.318940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.336671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.336726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.336741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.354346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.354415] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.354429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.372037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.372098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.372112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.389741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.389777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.389791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.407730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.407783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.407798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.425572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.425612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.425626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.443298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.443336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.443349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.460981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.461017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.461031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.478674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 
00:52:25.018 [2024-12-09 10:04:32.478712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.478726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.496562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.496600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.496614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.018 [2024-12-09 10:04:32.514235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.018 [2024-12-09 10:04:32.514273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.018 [2024-12-09 10:04:32.514287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.277 [2024-12-09 10:04:32.531998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.277 [2024-12-09 10:04:32.532039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.277 [2024-12-09 10:04:32.532053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.277 [2024-12-09 10:04:32.549784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.277 [2024-12-09 10:04:32.549823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.277 [2024-12-09 10:04:32.549837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.277 [2024-12-09 10:04:32.567450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.277 [2024-12-09 10:04:32.567504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.277 [2024-12-09 10:04:32.567518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.277 [2024-12-09 10:04:32.585017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.277 [2024-12-09 10:04:32.585054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.277 [2024-12-09 10:04:32.585067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.277 [2024-12-09 10:04:32.602572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.277 [2024-12-09 10:04:32.602625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.277 [2024-12-09 10:04:32.602639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.277 [2024-12-09 10:04:32.620354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.277 [2024-12-09 10:04:32.620394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.277 [2024-12-09 10:04:32.620408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.277 [2024-12-09 10:04:32.638069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.278 [2024-12-09 10:04:32.638122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.278 [2024-12-09 10:04:32.638136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.278 [2024-12-09 10:04:32.655796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.278 [2024-12-09 10:04:32.655848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.278 [2024-12-09 10:04:32.655862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.278 [2024-12-09 10:04:32.673505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.278 [2024-12-09 10:04:32.673542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.278 [2024-12-09 10:04:32.673556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.278 [2024-12-09 10:04:32.691190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.278 [2024-12-09 10:04:32.691227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.278 [2024-12-09 10:04:32.691240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.278 [2024-12-09 10:04:32.708872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.278 [2024-12-09 10:04:32.708908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.278 [2024-12-09 10:04:32.708922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.278 [2024-12-09 10:04:32.726538] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.278 [2024-12-09 10:04:32.726576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.278 [2024-12-09 10:04:32.726590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.278 [2024-12-09 10:04:32.744287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.278 [2024-12-09 10:04:32.744325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.278 [2024-12-09 10:04:32.744338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.278 [2024-12-09 10:04:32.762181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.278 [2024-12-09 10:04:32.762233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.278 [2024-12-09 10:04:32.762247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.536 [2024-12-09 10:04:32.780097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.536 [2024-12-09 10:04:32.780135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.536 [2024-12-09 10:04:32.780149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.536 [2024-12-09 10:04:32.798109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.536 [2024-12-09 10:04:32.798161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.536 [2024-12-09 10:04:32.798176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.536 [2024-12-09 10:04:32.815817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.536 [2024-12-09 10:04:32.815854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.536 [2024-12-09 10:04:32.815868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.536 [2024-12-09 10:04:32.833646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.537 [2024-12-09 10:04:32.833686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.537 [2024-12-09 10:04:32.833699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
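Once the 2-second randread run finishes, digest.sh checks that the injected digest failures were actually accounted as transient transport errors: the (( 111 > 0 )) comparison below tests the per-bdev counter against zero. The counter is read from bdev_get_iostat and filtered with the jq expression shown at digest.sh@28, which relies on the --nvme-error-stat option this test enables on the bdev_nvme module. A one-line sketch of that query against the bperf.sock socket used throughout this run:

# Sketch of get_transient_errcount for nvme0n1 (counter path taken from the jq filter below).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'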
00:52:25.537 [2024-12-09 10:04:32.851597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.537 [2024-12-09 10:04:32.851651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.537 [2024-12-09 10:04:32.851681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.537 14105.50 IOPS, 55.10 MiB/s [2024-12-09T10:04:33.039Z] [2024-12-09 10:04:32.868981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc81b50) 00:52:25.537 [2024-12-09 10:04:32.869025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:25.537 [2024-12-09 10:04:32.869039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:25.537 00:52:25.537 Latency(us) 00:52:25.537 [2024-12-09T10:04:33.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:25.537 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:52:25.537 nvme0n1 : 2.01 14124.22 55.17 0.00 0.00 9054.00 8460.10 34317.03 00:52:25.537 [2024-12-09T10:04:33.039Z] =================================================================================================================== 00:52:25.537 [2024-12-09T10:04:33.039Z] Total : 14124.22 55.17 0.00 0.00 9054.00 8460.10 34317.03 00:52:25.537 { 00:52:25.537 "results": [ 00:52:25.537 { 00:52:25.537 "job": "nvme0n1", 00:52:25.537 "core_mask": "0x2", 00:52:25.537 "workload": "randread", 00:52:25.537 "status": "finished", 00:52:25.537 "queue_depth": 128, 00:52:25.537 "io_size": 4096, 00:52:25.537 "runtime": 2.006412, 00:52:25.537 "iops": 14124.217757868275, 00:52:25.537 "mibps": 55.17272561667295, 00:52:25.537 "io_failed": 0, 00:52:25.537 "io_timeout": 0, 00:52:25.537 "avg_latency_us": 9054.000991887184, 00:52:25.537 "min_latency_us": 8460.101818181818, 00:52:25.537 "max_latency_us": 34317.03272727273 00:52:25.537 } 00:52:25.537 ], 00:52:25.537 "core_count": 1 00:52:25.537 } 00:52:25.537 10:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:52:25.537 10:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:52:25.537 | .driver_specific 00:52:25.537 | .nvme_error 00:52:25.537 | .status_code 00:52:25.537 | .command_transient_transport_error' 00:52:25.537 10:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:52:25.537 10:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 111 > 0 )) 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80301 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80301 ']' 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80301 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@959 -- # uname 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80301 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:25.796 killing process with pid 80301 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80301' 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80301 00:52:25.796 Received shutdown signal, test time was about 2.000000 seconds 00:52:25.796 00:52:25.796 Latency(us) 00:52:25.796 [2024-12-09T10:04:33.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:25.796 [2024-12-09T10:04:33.298Z] =================================================================================================================== 00:52:25.796 [2024-12-09T10:04:33.298Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:25.796 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80301 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80353 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80353 /var/tmp/bperf.sock 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80353 ']' 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:26.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
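The lines above tear down the first bdevperf instance (pid 80301) and start a second one (pid 80353) for the 131072-byte, queue-depth-16 randread error case. Its setup, traced below, follows the same pattern: enable per-controller error statistics and unlimited bdev retries, clear any previous crc32c injection, attach the TCP controller with data digest enabled (--ddgst), re-arm the corrupt injection for 32 operations, and drive the workload through bdevperf.py. Condensed into a sketch, with paths and addresses copied from this log (the socket used for the accel_error_inject_error rpc_cmd calls is an assumption):

# Sketch of the qd=16 / 128KiB randread digest-error setup (mirrors the trace lines below).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Start bdevperf in wait-for-RPC mode on core mask 0x2, flags as shown above.
# (The real test waits for the socket to appear before issuing any RPCs.)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &

"$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable       # assumption: same socket as the other RPCs
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32  # assumption: same socket
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests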
00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:26.055 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:26.055 [2024-12-09 10:04:33.460734] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:52:26.055 [2024-12-09 10:04:33.460823] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80353 ] 00:52:26.055 I/O size of 131072 is greater than zero copy threshold (65536). 00:52:26.055 Zero copy mechanism will not be used. 00:52:26.314 [2024-12-09 10:04:33.610200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:26.314 [2024-12-09 10:04:33.642571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:26.314 [2024-12-09 10:04:33.671491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:26.314 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:26.314 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:52:26.314 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:52:26.314 10:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:52:26.572 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:52:26.572 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.572 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:26.572 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.572 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:26.572 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:26.867 nvme0n1 00:52:26.867 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:52:26.867 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.867 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:27.143 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.143 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:52:27.143 10:04:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:52:27.143 I/O size of 131072 is greater than zero copy threshold (65536). 00:52:27.143 Zero copy mechanism will not be used. 00:52:27.143 Running I/O for 2 seconds... 00:52:27.143 [2024-12-09 10:04:34.513263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.513321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.513339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.517832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.517876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.517892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.522314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.522355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.522370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.526813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.526854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.526870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.531258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.531299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.531314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.535703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.535745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.535760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.540114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.540155] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.540170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.544637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.544677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.544708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.549221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.549260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.549275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.553632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.553672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.553687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.558112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.558152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.558167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.562519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.562559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.562574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.567004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.567043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.567059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.571448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1776620) 00:52:27.143 [2024-12-09 10:04:34.571490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.143 [2024-12-09 10:04:34.571504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.143 [2024-12-09 10:04:34.576104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.576147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.576162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.580648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.580691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.580706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.585168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.585208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.585223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.589667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.589707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.589738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.594198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.594237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.594268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.598656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.598696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.598711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.603138] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.603176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.603207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.607583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.607622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.607653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.612132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.612171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.612186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.616613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.616652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.616684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.621121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.621159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.621190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.625560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.625600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.625615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.630072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.630109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.630140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:52:27.144 [2024-12-09 10:04:34.634581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.634620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.634651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.144 [2024-12-09 10:04:34.639097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.144 [2024-12-09 10:04:34.639135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.144 [2024-12-09 10:04:34.639150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.643541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.643581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.643596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.648102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.648142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.648157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.652587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.652626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.652656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.657150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.657189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.657204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.661532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.661572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.661587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.666030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.666069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.666100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.670554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.670593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.670625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.675192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.675229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.675259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.679782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.679823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.679838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.684335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.684375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.684389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.688879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.688921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.688935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.693301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.693341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.693355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.697794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.697834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.697848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.702271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.702311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.702326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.706741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.706781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.706795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.711274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.711313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.404 [2024-12-09 10:04:34.711344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.404 [2024-12-09 10:04:34.715793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.404 [2024-12-09 10:04:34.715833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.715849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.720332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.720372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.720387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.724785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.724825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
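Each of the injected failures above leaves two searchable signatures: the digest mismatch reported by nvme_tcp.c and the transient transport error completion printed by nvme_qpair.c. If this console output is saved to a file, a rough tally of both strings is an easy consistency check (the file name here is hypothetical):

    # Count the two error signatures in a saved copy of this console log.
    LOG=nvmf_digest_error_console.log
    grep -c 'data digest error on tqpair' "$LOG"
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOG"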
00:52:27.405 [2024-12-09 10:04:34.724840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.729254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.729294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.729309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.733800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.733841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.733856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.738255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.738296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.738310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.742731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.742771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.742787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.747212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.747252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.747267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.751695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.751736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.751751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.756207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.756248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.756262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.760607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.760647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.760661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.765039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.765078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.765093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.769516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.769554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.769585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.774048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.774087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.774102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.778477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.778517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.778532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.782915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.782970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.782987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.787369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.787544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.787565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.792031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.792079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.792095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.796501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.796541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.796556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.800919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.800973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.800989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.805336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.805376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.805391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.809785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.809826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.809841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.814207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.814246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.814262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.818712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 
00:52:27.405 [2024-12-09 10:04:34.818752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.818767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.823176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.823215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.823230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.827621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.827661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.827676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.832111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.832151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.832166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.836541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.405 [2024-12-09 10:04:34.836581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.405 [2024-12-09 10:04:34.836595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.405 [2024-12-09 10:04:34.840875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.840914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.840929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.845316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.845480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.845498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.850021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.850199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.850380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.854997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.855179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.855352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.859884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.860147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.860299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.864893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.865094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.865241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.869812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.870062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.870195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.874641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.874835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.875014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.879707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.879891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.880171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.884821] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.885072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.885205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.889836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.890082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.890222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.894708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.894940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.895120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.406 [2024-12-09 10:04:34.899625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.406 [2024-12-09 10:04:34.899808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.406 [2024-12-09 10:04:34.899937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.904488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.904716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.904853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.909516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.909697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.909823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.914333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.914529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.914730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
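The repeated digest-error / TRANSIENT TRANSPORT ERROR pairs above are the intended outcome of this run rather than a regression: earlier in the trace the test attaches the controller with data digest enabled and injects crc32c corruption before starting I/O. Condensed into one hypothetical script (commands copied from the xtrace; the socket behind the suite's rpc_cmd helper is not visible in the trace and is assumed below):

    #!/usr/bin/env bash
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Placeholder for the suite's rpc_cmd helper; the real helper resolves the
    # suite's default RPC socket, which this trace does not show.
    rpc_cmd() { "$RPC_PY" "$@"; }

    # Initiator-side options and controller attach with data digest (--ddgst),
    # issued against bdevperf's RPC socket, exactly as expanded in the trace.
    "$RPC_PY" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    "$RPC_PY" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm crc32c corruption (the -i 32 argument is reproduced from the trace
    # as-is), then kick off the 2-second randread run that produced the
    # digest-error entries seen in this section.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests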
00:52:27.666 [2024-12-09 10:04:34.919279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.919535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.919693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.924342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.924522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.924649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.929266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.929448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.929616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.934164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.934204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.934236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.938626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.938666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.938697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.943151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.943206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.943221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.947616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.947656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.947687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.952056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.952123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.952137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.956434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.956488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.956518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.961002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.961085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.961116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.965467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.965505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.965535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.969950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.969997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.970029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.974385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.974423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.974453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.978816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.978856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.978871] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.983462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.983500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.666 [2024-12-09 10:04:34.983531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.666 [2024-12-09 10:04:34.988004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.666 [2024-12-09 10:04:34.988040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:34.988080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:34.992422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:34.992477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:34.992509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:34.996948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:34.997015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:34.997030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.001368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.001568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.001587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.006007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.006045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.006075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.010490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.010529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.010560] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.014948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.014996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.015027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.019340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.019378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.019409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.023830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.023868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.023899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.028319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.028487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.028506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.032904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.032943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.032984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.037354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.037409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.037440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.041874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.041915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:52:27.667 [2024-12-09 10:04:35.041930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.046338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.046379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.046394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.050752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.050792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.050806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.055246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.055286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.055301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.059702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.059742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.059757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.064212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.064251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.064265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.068684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.068724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.068739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.073136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.073176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.073190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.077578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.077621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.077636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.081950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.082000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.082015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.086372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.086541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.086560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.091031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.091085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.091100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.095499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.095541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.095556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.099984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.100020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.100033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.104393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.104433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.104448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.108875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.667 [2024-12-09 10:04:35.108915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.667 [2024-12-09 10:04:35.108930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.667 [2024-12-09 10:04:35.113347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 [2024-12-09 10:04:35.113508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.113527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.668 [2024-12-09 10:04:35.118136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 [2024-12-09 10:04:35.118315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.118494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.668 [2024-12-09 10:04:35.123112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 [2024-12-09 10:04:35.123291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.123457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.668 [2024-12-09 10:04:35.128077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 [2024-12-09 10:04:35.128258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.128439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.668 [2024-12-09 10:04:35.133050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 [2024-12-09 10:04:35.133231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.133394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.668 [2024-12-09 10:04:35.138020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 
[2024-12-09 10:04:35.138199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.138364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.668 [2024-12-09 10:04:35.142893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 [2024-12-09 10:04:35.143086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.143242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.668 [2024-12-09 10:04:35.147890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 [2024-12-09 10:04:35.148089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.148257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.668 [2024-12-09 10:04:35.152823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 [2024-12-09 10:04:35.153015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.153165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.668 [2024-12-09 10:04:35.157884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 [2024-12-09 10:04:35.158074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.158220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.668 [2024-12-09 10:04:35.162781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.668 [2024-12-09 10:04:35.162969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.668 [2024-12-09 10:04:35.163147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.167676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.167860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.168024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.172629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.172809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.172991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.177568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.177748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.177925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.182493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.182674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.182851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.187456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.187636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.187803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.192405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.192585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.192751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.197290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.197470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.197624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.202201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.202376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.202536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.207068] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.207244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.207407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.212020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.212212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.212373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.216928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.217131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.217311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.221861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.222056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.222193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.226733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.226916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.227080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.231606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.231788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.231932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.236483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.236664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.236947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:52:27.928 [2024-12-09 10:04:35.241452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.241494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.241509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.245895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.245936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.245950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.250378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.250418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.250449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.254858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.254899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.254930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.259368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.259408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.259438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.263792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.263833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.263847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.268265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.268305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.928 [2024-12-09 10:04:35.268319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.928 [2024-12-09 10:04:35.272783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.928 [2024-12-09 10:04:35.272838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.272868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.277445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.277501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.277516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.281990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.282055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.282087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.286472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.286529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.286560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.291192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.291232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.291246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.295655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.295726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.295741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.300202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.300242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.300256] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.304731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.304787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.304802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.309342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.309399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.309413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.313796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.313851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.313882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.318358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.318397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.318427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.322821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.322877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.322907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.327325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.327363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.327394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.331689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.331745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 
10:04:35.331759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.336186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.336226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.336240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.340669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.340725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.340756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.345220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.345291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.345305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.349640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.349710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.349739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.354147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.354200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.354230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.358652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.358707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.358737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.363184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.363223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.363255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.367614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.367670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.367684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.372011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.372074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.372089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.376484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.376538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.376568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.381071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.381129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.381143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.385513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.385552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.385567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.389925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.389974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.929 [2024-12-09 10:04:35.389989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.929 [2024-12-09 10:04:35.394362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.929 [2024-12-09 10:04:35.394401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.930 [2024-12-09 10:04:35.394415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.930 [2024-12-09 10:04:35.398815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.930 [2024-12-09 10:04:35.398854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.930 [2024-12-09 10:04:35.398869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.930 [2024-12-09 10:04:35.403327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.930 [2024-12-09 10:04:35.403367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.930 [2024-12-09 10:04:35.403388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.930 [2024-12-09 10:04:35.407781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.930 [2024-12-09 10:04:35.407821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.930 [2024-12-09 10:04:35.407835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:27.930 [2024-12-09 10:04:35.412256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.930 [2024-12-09 10:04:35.412296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.930 [2024-12-09 10:04:35.412310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:27.930 [2024-12-09 10:04:35.416710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.930 [2024-12-09 10:04:35.416750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.930 [2024-12-09 10:04:35.416764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:27.930 [2024-12-09 10:04:35.421177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.930 [2024-12-09 10:04:35.421216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.930 [2024-12-09 10:04:35.421230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:27.930 [2024-12-09 10:04:35.425604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:27.930 [2024-12-09 10:04:35.425644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:27.930 [2024-12-09 10:04:35.425659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.430045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.430087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.430101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.434484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.434523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.434538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.439064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.439118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.439133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.443579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.443617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.443631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.448024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.448071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.448086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.452465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.452500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.452513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.456921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 
00:52:28.190 [2024-12-09 10:04:35.456970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.456986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.461340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.461377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.461391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.465783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.465822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.465836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.470278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.470318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.470332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.474807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.474864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.474878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.479423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.479478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.479493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.483987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.484027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.484042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.190 [2024-12-09 10:04:35.488447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.190 [2024-12-09 10:04:35.488488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.190 [2024-12-09 10:04:35.488503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.492915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.492967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.492983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.497350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.497388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.497402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.501785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.501825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.501839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.191 6711.00 IOPS, 838.88 MiB/s [2024-12-09T10:04:35.693Z] [2024-12-09 10:04:35.507414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.507454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.507468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.511880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.511919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.511933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.516357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.516406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.516420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
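Each of the repeated records above follows the same pattern: the TCP transport's receive path (nvme_tcp_accel_seq_recv_compute_crc32_done) detects a data digest mismatch on the queue pair, and the affected READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (SCT 00h / SC 22h) with dnr:0, i.e. the host is allowed to retry. NVMe/TCP data digests are CRC32C over the PDU payload; the log's function name suggests SPDK computes them through its accel framework. Below is a minimal, self-contained sketch of that digest check, assuming a bitwise CRC32C and a hypothetical data_digest_ok() helper; it is not SPDK's actual implementation.

/* Minimal sketch, not SPDK's actual code: verify an NVMe/TCP data digest
 * (CRC32C over the PDU payload) the way a transport would before failing a
 * request with the "data digest error" seen above.  data_digest_ok() is a
 * hypothetical helper name, not an SPDK API. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* True when the digest computed over the payload matches the digest received
 * on the wire; false is the condition logged above as a data digest error. */
static bool data_digest_ok(const uint8_t *payload, size_t len, uint32_t wire_digest)
{
    return crc32c(payload, len) == wire_digest;
}

int main(void)
{
    const uint8_t payload[] = "example PDU payload";
    uint32_t good = crc32c(payload, sizeof(payload) - 1);

    printf("intact digest:    %s\n",
           data_digest_ok(payload, sizeof(payload) - 1, good) ? "ok" : "data digest error");
    printf("corrupted digest: %s\n",
           data_digest_ok(payload, sizeof(payload) - 1, good ^ 1u) ? "ok" : "data digest error");
    return 0;
}

A mismatch in this check is the condition the transport maps to the retryable (dnr:0) transient transport error completions logged above.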
00:52:28.191 [2024-12-09 10:04:35.520812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.520852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.520866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.525276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.525316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.525331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.529776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.529815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.529828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.534283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.534322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.534337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.538731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.538770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.538784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.543208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.543247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.543261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.547642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.547682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.547696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.552069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.552107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.552122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.556457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.556496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.556510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.560896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.560935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.560949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.565347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.565387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.565402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.569832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.569873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.569887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.574235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.574274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.574288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.578676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.578716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.578729] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.583171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.583210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.583224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.587598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.587637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.587651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.592054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.592106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.592121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.596537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.596578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.596592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.601009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.601046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.601059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.605472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.605511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 10:04:35.605525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.609910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.609950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.191 [2024-12-09 
10:04:35.609980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.191 [2024-12-09 10:04:35.614363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.191 [2024-12-09 10:04:35.614403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.614417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.618810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.618857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.618871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.623214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.623259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.623275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.627708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.627749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.627764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.632215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.632257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.632272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.636663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.636702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.636717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.641121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.641161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.641176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.645511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.645551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.645565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.649904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.649943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.649968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.654309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.654348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.654362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.658762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.658802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.658816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.663172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.663211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.663224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.667588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.667627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.667641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.672047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.672101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:14 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.672116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.676489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.676528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.676542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.680878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.680917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.680931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.685331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.685371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.685385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.192 [2024-12-09 10:04:35.689723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.192 [2024-12-09 10:04:35.689763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.192 [2024-12-09 10:04:35.689777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.694183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.694224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.694238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.698650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.698689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.698703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.703220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.703273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.703288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.707786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.707825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.707839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.712357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.712396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.712411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.717024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.717089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.717120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.721572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.721612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.721626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.726045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.726084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.726098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.730660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.730700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.730715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.735111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 
00:52:28.453 [2024-12-09 10:04:35.735165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.735179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.739632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.739672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.739686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.744109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.744149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.744162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.748601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.748641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.748656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.753037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.753075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.753088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.757557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.757596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.757610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.762045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.762100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.762115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.766490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.766529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.766543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.770958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.771023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.771038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.775474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.775530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.775545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.780021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.780083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.780098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.784528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.784582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.784596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.789017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.789071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.789085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.793437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.793492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.793506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.797821] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.797860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.453 [2024-12-09 10:04:35.797874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.453 [2024-12-09 10:04:35.802328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.453 [2024-12-09 10:04:35.802368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.802382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.806779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.806835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.806849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.811307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.811362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.811376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.815794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.815833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.815848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.820313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.820352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.820366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.824850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.824906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.824920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.829447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.829501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.829516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.834018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.834072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.834086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.838527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.838567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.838581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.843003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.843058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.843072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.847468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.847523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.847537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.851954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.852019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.852033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.856489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.856529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.856543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.860920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.860984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.860999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.865388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.865441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.865455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.869856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.869895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.869909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.874411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.874466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.874480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.878856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.878895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.878910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.883325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.883365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.883379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.887793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.887833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.887848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.892317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.892356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.892370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.896701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.896741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.896755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.901139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.901179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.901193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.905603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.905643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.905657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.910102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.910141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.910155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.914583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.914623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.914636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.918998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.919037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:28.454 [2024-12-09 10:04:35.919051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.923318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.454 [2024-12-09 10:04:35.923356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.454 [2024-12-09 10:04:35.923370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.454 [2024-12-09 10:04:35.927722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.455 [2024-12-09 10:04:35.927762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.455 [2024-12-09 10:04:35.927776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.455 [2024-12-09 10:04:35.932236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.455 [2024-12-09 10:04:35.932275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.455 [2024-12-09 10:04:35.932289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.455 [2024-12-09 10:04:35.936667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.455 [2024-12-09 10:04:35.936706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.455 [2024-12-09 10:04:35.936720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.455 [2024-12-09 10:04:35.941115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.455 [2024-12-09 10:04:35.941155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.455 [2024-12-09 10:04:35.941169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.455 [2024-12-09 10:04:35.945538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.455 [2024-12-09 10:04:35.945577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.455 [2024-12-09 10:04:35.945591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.455 [2024-12-09 10:04:35.949997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.455 [2024-12-09 10:04:35.950035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.455 [2024-12-09 10:04:35.950050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.954455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.954493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.954507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.958872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.958912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.958927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.963289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.963328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.963342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.967653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.967692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.967706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.972107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.972147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.972161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.976520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.976557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.976571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.980953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.981003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.981016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.985365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.985407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.985422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.989825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.989865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.989878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.994338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.994393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.994407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:35.998912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:35.998951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:35.998980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:36.003378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:36.003417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:36.003432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:36.007816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:36.007856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:36.007870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:36.012254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 
[2024-12-09 10:04:36.012293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:36.012307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:36.016708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:36.016747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:36.016762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:36.021248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:36.021303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:36.021317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:36.025694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:36.025749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.715 [2024-12-09 10:04:36.025779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.715 [2024-12-09 10:04:36.030214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.715 [2024-12-09 10:04:36.030268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.030298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.034816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.034856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.034870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.039258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.039297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.039311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.043669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.043708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.043722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.048090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.048131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.048145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.052537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.052576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.052590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.057007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.057061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.057075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.061378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.061433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.061447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.065770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.065810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.065825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.070225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.070279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.070293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.074721] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.074776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.074790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.079139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.079177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.079190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.083477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.083532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.083547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.087953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.088001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.088015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.092413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.092454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.092468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.096875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.096930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.096943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.101438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.101493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.101523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.105996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.106050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.106080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.110353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.110407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.110436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.114833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.114888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.114918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.119320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.119359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.119374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.123758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.123813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.123827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.128327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.128367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.128381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.132782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.132821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.132835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.137350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.137388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.137402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.141867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.141908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.141922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.146262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.146302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.146316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.150652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.716 [2024-12-09 10:04:36.150691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.716 [2024-12-09 10:04:36.150714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.716 [2024-12-09 10:04:36.155133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.155171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.155185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.159587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.159627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.159641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.164077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.164117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.164132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.168540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.168579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.168594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.173025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.173064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.173078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.177456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.177496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.177510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.181910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.181950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.181978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.186423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.186462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.186476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.190995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.191046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.191060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.195512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.195551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:28.717 [2024-12-09 10:04:36.195569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.199940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.200003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.200018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.204489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.204528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.204542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.208858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.208912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.208926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.717 [2024-12-09 10:04:36.213473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.717 [2024-12-09 10:04:36.213513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.717 [2024-12-09 10:04:36.213528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.217905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.217970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.217986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.222368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.222423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.222437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.226813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.226868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.226882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.231283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.231323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.231337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.235712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.235766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.235780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.240190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.240228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.240242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.244573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.244612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.244627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.249007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.249061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.249075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.253314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.253370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.253384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.257744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.257784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.257798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.262168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.262223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.262237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.266541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.266597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.266611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.270936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.270984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.977 [2024-12-09 10:04:36.270998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.977 [2024-12-09 10:04:36.275410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.977 [2024-12-09 10:04:36.275449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.275463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.279846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.279885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.279900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.284326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.284365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.284379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.288744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 
[2024-12-09 10:04:36.288783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.288797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.293207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.293247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.293261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.297674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.297717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.297731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.302141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.302179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.302192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.306595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.306636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.306650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.311108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.311147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.311161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.315566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.315606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.315620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.320040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.320088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.320103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.324472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.324511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.324525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.328885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.328925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.328939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.333348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.333387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.333402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.337805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.337844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.337859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.342263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.342303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.342317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.346727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.346766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.346780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.351164] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.351203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.351217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.355595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.355634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.355648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.359977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.360018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.360032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.364462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.364501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.364515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.368902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.368940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.368970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.373327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.373366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.373380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.377748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.377788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.377802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:52:28.978 [2024-12-09 10:04:36.382244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.382284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.382298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.386696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.386735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.386749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.391151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.391189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.978 [2024-12-09 10:04:36.391203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.978 [2024-12-09 10:04:36.395565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.978 [2024-12-09 10:04:36.395605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.395619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.400072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.400112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.400126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.404487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.404526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.404541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.408928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.408976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.408991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.413384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.413424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.413438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.417823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.417864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.417878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.422243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.422281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.422295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.426745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.426785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.426799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.431279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.431333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.431348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.435735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.435775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.435789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.440204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.440243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.440256] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.444624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.444664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.444678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.449082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.449120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.449134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.453535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.453576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.453590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.458144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.458184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.458198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.462674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.462711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.462725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.467116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.467151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.467165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.471552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.471591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 
10:04:36.471605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:28.979 [2024-12-09 10:04:36.476046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:28.979 [2024-12-09 10:04:36.476093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:28.979 [2024-12-09 10:04:36.476107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:29.238 [2024-12-09 10:04:36.480551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:29.238 [2024-12-09 10:04:36.480590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:29.238 [2024-12-09 10:04:36.480604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:29.238 [2024-12-09 10:04:36.485004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:29.238 [2024-12-09 10:04:36.485043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:29.238 [2024-12-09 10:04:36.485057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:29.238 [2024-12-09 10:04:36.489484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:29.238 [2024-12-09 10:04:36.489539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:29.238 [2024-12-09 10:04:36.489553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:29.238 [2024-12-09 10:04:36.494018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:29.238 [2024-12-09 10:04:36.494056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:29.238 [2024-12-09 10:04:36.494070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:29.238 [2024-12-09 10:04:36.498567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:29.238 [2024-12-09 10:04:36.498622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:29.238 [2024-12-09 10:04:36.498636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:29.238 6820.00 IOPS, 852.50 MiB/s [2024-12-09T10:04:36.740Z] [2024-12-09 10:04:36.504786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1776620) 00:52:29.238 [2024-12-09 10:04:36.504827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:29.238 [2024-12-09 10:04:36.504841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:29.238 00:52:29.238 Latency(us) 00:52:29.238 [2024-12-09T10:04:36.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:29.238 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:52:29.238 nvme0n1 : 2.00 6817.97 852.25 0.00 0.00 2342.88 2070.34 13583.83 00:52:29.238 [2024-12-09T10:04:36.740Z] =================================================================================================================== 00:52:29.238 [2024-12-09T10:04:36.740Z] Total : 6817.97 852.25 0.00 0.00 2342.88 2070.34 13583.83 00:52:29.238 { 00:52:29.238 "results": [ 00:52:29.238 { 00:52:29.238 "job": "nvme0n1", 00:52:29.238 "core_mask": "0x2", 00:52:29.239 "workload": "randread", 00:52:29.239 "status": "finished", 00:52:29.239 "queue_depth": 16, 00:52:29.239 "io_size": 131072, 00:52:29.239 "runtime": 2.002943, 00:52:29.239 "iops": 6817.967361028247, 00:52:29.239 "mibps": 852.2459201285309, 00:52:29.239 "io_failed": 0, 00:52:29.239 "io_timeout": 0, 00:52:29.239 "avg_latency_us": 2342.8778612131864, 00:52:29.239 "min_latency_us": 2070.3418181818183, 00:52:29.239 "max_latency_us": 13583.825454545455 00:52:29.239 } 00:52:29.239 ], 00:52:29.239 "core_count": 1 00:52:29.239 } 00:52:29.239 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:52:29.239 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:52:29.239 | .driver_specific 00:52:29.239 | .nvme_error 00:52:29.239 | .status_code 00:52:29.239 | .command_transient_transport_error' 00:52:29.239 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:52:29.239 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 441 > 0 )) 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80353 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80353 ']' 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80353 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80353 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:29.497 killing process with pid 80353 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80353' 00:52:29.497 Received shutdown signal, test 
time was about 2.000000 seconds 00:52:29.497 00:52:29.497 Latency(us) 00:52:29.497 [2024-12-09T10:04:36.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:29.497 [2024-12-09T10:04:36.999Z] =================================================================================================================== 00:52:29.497 [2024-12-09T10:04:36.999Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80353 00:52:29.497 10:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80353 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80401 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80401 /var/tmp/bperf.sock 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80401 ']' 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:52:29.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:29.756 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:29.756 [2024-12-09 10:04:37.054195] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:52:29.756 [2024-12-09 10:04:37.054285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80401 ] 00:52:29.756 [2024-12-09 10:04:37.203114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:29.756 [2024-12-09 10:04:37.236018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:30.014 [2024-12-09 10:04:37.265170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:30.014 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:30.014 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:52:30.014 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:52:30.014 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:52:30.272 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:52:30.272 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.272 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:30.272 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.272 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:30.272 10:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:30.840 nvme0n1 00:52:30.840 10:04:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:52:30.840 10:04:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.840 10:04:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:30.840 10:04:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.840 10:04:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:52:30.840 10:04:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:52:30.840 Running I/O for 2 seconds... 
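(For orientation: the shell trace above is digest.sh's randwrite pass of run_bperf_err. Condensed into plain commands, the sequence it drives looks roughly like the sketch below. Everything in it is lifted from the trace in this log — the socket path, target address, NQN, and bdevperf flags are the ones this run used; rpc_cmd is the autotest wrapper around scripts/rpc.py, and which application socket it points at is not visible in this excerpt, so that detail is an assumption here.)

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf as the initiator under test: 4 KiB random writes, queue depth 128,
# 2 seconds, parked in -z mode until perform_tests is issued over the RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r $BPERF_SOCK \
    -w randwrite -o 4096 -t 2 -q 128 -z &

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
# so injected digest errors are counted rather than failing the job outright.
$RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# rpc_cmd accel_error_inject_error -o crc32c -t disable    # clear any earlier injection
# Attach the target with data digest enabled (--ddgst), so every data PDU carries a
# CRC32C that is verified on receive.
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
# Periodically corrupt crc32c results (-i 256 in this run); the mismatches surface as
# the "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR lines that follow.

# Drive I/O, then read the transient-error counter back the same way digest.sh@71 does.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
$RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'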
00:52:30.840 [2024-12-09 10:04:38.219889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efb048 00:52:30.840 [2024-12-09 10:04:38.221431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:30.840 [2024-12-09 10:04:38.221489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:30.840 [2024-12-09 10:04:38.236514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efb8b8 00:52:30.840 [2024-12-09 10:04:38.237987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:30.840 [2024-12-09 10:04:38.238022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:30.840 [2024-12-09 10:04:38.253134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efc128 00:52:30.840 [2024-12-09 10:04:38.254593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:30.840 [2024-12-09 10:04:38.254647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:30.840 [2024-12-09 10:04:38.269811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efc998 00:52:30.840 [2024-12-09 10:04:38.271275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:30.840 [2024-12-09 10:04:38.271312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:30.840 [2024-12-09 10:04:38.286393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efd208 00:52:30.840 [2024-12-09 10:04:38.287806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:30.840 [2024-12-09 10:04:38.287859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:30.840 [2024-12-09 10:04:38.303063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efda78 00:52:30.840 [2024-12-09 10:04:38.304451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:30.840 [2024-12-09 10:04:38.304487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:30.840 [2024-12-09 10:04:38.319794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efe2e8 00:52:30.840 [2024-12-09 10:04:38.321182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:30.840 [2024-12-09 10:04:38.321219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 
dnr:0 00:52:30.840 [2024-12-09 10:04:38.336363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efeb58 00:52:30.840 [2024-12-09 10:04:38.337704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:30.840 [2024-12-09 10:04:38.337741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.359931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efef90 00:52:31.099 [2024-12-09 10:04:38.362583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.362636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.376650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efeb58 00:52:31.099 [2024-12-09 10:04:38.379266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.379305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.393298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efe2e8 00:52:31.099 [2024-12-09 10:04:38.395916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.395953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.409949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efda78 00:52:31.099 [2024-12-09 10:04:38.412517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.412552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.426435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efd208 00:52:31.099 [2024-12-09 10:04:38.428985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.429019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.442920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efc998 00:52:31.099 [2024-12-09 10:04:38.445473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.445510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 
m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.459514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efc128 00:52:31.099 [2024-12-09 10:04:38.462070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.462106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.476052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efb8b8 00:52:31.099 [2024-12-09 10:04:38.478548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.478600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.492698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efb048 00:52:31.099 [2024-12-09 10:04:38.495239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.495290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.509345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016efa7d8 00:52:31.099 [2024-12-09 10:04:38.511790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.511825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.525899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef9f68 00:52:31.099 [2024-12-09 10:04:38.528351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.528502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.542882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef96f8 00:52:31.099 [2024-12-09 10:04:38.545466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.545503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.559645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef8e88 00:52:31.099 [2024-12-09 10:04:38.562114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.562149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.576334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef8618 00:52:31.099 [2024-12-09 10:04:38.578774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.578903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:31.099 [2024-12-09 10:04:38.593408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef7da8 00:52:31.099 [2024-12-09 10:04:38.595913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.099 [2024-12-09 10:04:38.596036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:31.357 [2024-12-09 10:04:38.610445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef7538 00:52:31.357 [2024-12-09 10:04:38.612926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.357 [2024-12-09 10:04:38.613061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:31.357 [2024-12-09 10:04:38.627330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef6cc8 00:52:31.357 [2024-12-09 10:04:38.629787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.357 [2024-12-09 10:04:38.629909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:31.357 [2024-12-09 10:04:38.644198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef6458 00:52:31.357 [2024-12-09 10:04:38.646571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.357 [2024-12-09 10:04:38.646712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:31.357 [2024-12-09 10:04:38.661061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef5be8 00:52:31.357 [2024-12-09 10:04:38.663384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.357 [2024-12-09 10:04:38.663508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:31.357 [2024-12-09 10:04:38.677954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef5378 00:52:31.357 [2024-12-09 10:04:38.680285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.357 [2024-12-09 10:04:38.680410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:31.357 [2024-12-09 10:04:38.694859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef4b08 00:52:31.357 [2024-12-09 10:04:38.697195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.357 [2024-12-09 10:04:38.697328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:31.358 [2024-12-09 10:04:38.711732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef4298 00:52:31.358 [2024-12-09 10:04:38.714062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.358 [2024-12-09 10:04:38.714189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:31.358 [2024-12-09 10:04:38.728807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef3a28 00:52:31.358 [2024-12-09 10:04:38.731123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.358 [2024-12-09 10:04:38.731228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:31.358 [2024-12-09 10:04:38.745671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef31b8 00:52:31.358 [2024-12-09 10:04:38.747937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.358 [2024-12-09 10:04:38.748050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:31.358 [2024-12-09 10:04:38.762471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef2948 00:52:31.358 [2024-12-09 10:04:38.764682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.358 [2024-12-09 10:04:38.764720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:31.358 [2024-12-09 10:04:38.779092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef20d8 00:52:31.358 [2024-12-09 10:04:38.781209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.358 [2024-12-09 10:04:38.781247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:31.358 [2024-12-09 10:04:38.795643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef1868 00:52:31.358 [2024-12-09 10:04:38.797744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.358 [2024-12-09 10:04:38.797781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:31.358 [2024-12-09 10:04:38.812183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef0ff8 00:52:31.358 [2024-12-09 10:04:38.814249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.358 [2024-12-09 10:04:38.814286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:31.358 [2024-12-09 10:04:38.828656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef0788 00:52:31.358 [2024-12-09 10:04:38.830704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.358 [2024-12-09 10:04:38.830739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:31.358 [2024-12-09 10:04:38.845213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eeff18 00:52:31.358 [2024-12-09 10:04:38.847237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.358 [2024-12-09 10:04:38.847272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:31.616 [2024-12-09 10:04:38.861690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eef6a8 00:52:31.616 [2024-12-09 10:04:38.863691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.616 [2024-12-09 10:04:38.863726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:31.616 [2024-12-09 10:04:38.878215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eeee38 00:52:31.616 [2024-12-09 10:04:38.880212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.616 [2024-12-09 10:04:38.880249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:31.616 [2024-12-09 10:04:38.894816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eee5c8 00:52:31.616 [2024-12-09 10:04:38.896818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.616 [2024-12-09 10:04:38.896852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:31.616 [2024-12-09 10:04:38.911390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eedd58 00:52:31.616 [2024-12-09 10:04:38.913346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.616 [2024-12-09 10:04:38.913380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:31.616 [2024-12-09 10:04:38.927982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eed4e8 00:52:31.617 [2024-12-09 10:04:38.929901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:38.929937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:38.944482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eecc78 00:52:31.617 [2024-12-09 10:04:38.946385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:38.946420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:38.961001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eec408 00:52:31.617 [2024-12-09 10:04:38.962876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:38.962913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:38.977509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eebb98 00:52:31.617 [2024-12-09 10:04:38.979374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:38.979409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:38.994021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eeb328 00:52:31.617 [2024-12-09 10:04:38.995851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:38.995887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:39.010525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eeaab8 00:52:31.617 [2024-12-09 10:04:39.012355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:39.012392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:39.027003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eea248 00:52:31.617 [2024-12-09 10:04:39.028804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:39.028840] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:39.043518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee99d8 00:52:31.617 [2024-12-09 10:04:39.045320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:39.045355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:39.060025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee9168 00:52:31.617 [2024-12-09 10:04:39.061779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:39.061815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:39.076522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee88f8 00:52:31.617 [2024-12-09 10:04:39.078261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:39.078295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:39.093007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee8088 00:52:31.617 [2024-12-09 10:04:39.094712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:39.094749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:31.617 [2024-12-09 10:04:39.109536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee7818 00:52:31.617 [2024-12-09 10:04:39.111240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.617 [2024-12-09 10:04:39.111275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:31.875 [2024-12-09 10:04:39.126062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee6fa8 00:52:31.875 [2024-12-09 10:04:39.127735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.875 [2024-12-09 10:04:39.127770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:31.875 [2024-12-09 10:04:39.142578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee6738 00:52:31.876 [2024-12-09 10:04:39.144244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.144279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.159090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee5ec8 00:52:31.876 [2024-12-09 10:04:39.160726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.160762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.175646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee5658 00:52:31.876 [2024-12-09 10:04:39.177279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.177317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.192148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee4de8 00:52:31.876 [2024-12-09 10:04:39.193747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.193784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:31.876 15182.00 IOPS, 59.30 MiB/s [2024-12-09T10:04:39.378Z] [2024-12-09 10:04:39.208660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee4578 00:52:31.876 [2024-12-09 10:04:39.210248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.210283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.225155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee3d08 00:52:31.876 [2024-12-09 10:04:39.226701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.226738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.241723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee3498 00:52:31.876 [2024-12-09 10:04:39.243261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.243297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.258270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee2c28 00:52:31.876 [2024-12-09 10:04:39.259785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10680 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.259822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.274794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee23b8 00:52:31.876 [2024-12-09 10:04:39.276303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.276337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.291307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee1b48 00:52:31.876 [2024-12-09 10:04:39.292781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.292819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.307776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee12d8 00:52:31.876 [2024-12-09 10:04:39.309248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.309283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.324305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee0a68 00:52:31.876 [2024-12-09 10:04:39.325732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.325771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.340779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee01f8 00:52:31.876 [2024-12-09 10:04:39.342195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.342237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.357292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016edf988 00:52:31.876 [2024-12-09 10:04:39.358682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.358718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:31.876 [2024-12-09 10:04:39.373797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016edf118 00:52:31.876 [2024-12-09 10:04:39.375175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:17270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:31.876 [2024-12-09 10:04:39.375211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.390250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ede8a8 00:52:32.135 [2024-12-09 10:04:39.391637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.391673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.406887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ede038 00:52:32.135 [2024-12-09 10:04:39.408298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.408335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.429843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ede038 00:52:32.135 [2024-12-09 10:04:39.432598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.432662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.446348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ede8a8 00:52:32.135 [2024-12-09 10:04:39.449029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.449079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.463092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016edf118 00:52:32.135 [2024-12-09 10:04:39.465658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.465709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.479439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016edf988 00:52:32.135 [2024-12-09 10:04:39.482062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.482113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.495703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee01f8 00:52:32.135 [2024-12-09 10:04:39.498243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:116 nsid:1 lba:14454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.498294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.511770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee0a68 00:52:32.135 [2024-12-09 10:04:39.514280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.514331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.528259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee12d8 00:52:32.135 [2024-12-09 10:04:39.530744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.530779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.544772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee1b48 00:52:32.135 [2024-12-09 10:04:39.547251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.547286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.561254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee23b8 00:52:32.135 [2024-12-09 10:04:39.563692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.563728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.577721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee2c28 00:52:32.135 [2024-12-09 10:04:39.580155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.580191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.594364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee3498 00:52:32.135 [2024-12-09 10:04:39.596861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.596898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.611226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee3d08 00:52:32.135 [2024-12-09 10:04:39.613666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.613703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:32.135 [2024-12-09 10:04:39.627950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee4578 00:52:32.135 [2024-12-09 10:04:39.630335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.135 [2024-12-09 10:04:39.630387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:32.394 [2024-12-09 10:04:39.644593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee4de8 00:52:32.394 [2024-12-09 10:04:39.646988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.394 [2024-12-09 10:04:39.647039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:32.394 [2024-12-09 10:04:39.660992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee5658 00:52:32.394 [2024-12-09 10:04:39.663312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.394 [2024-12-09 10:04:39.663349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:32.394 [2024-12-09 10:04:39.677382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee5ec8 00:52:32.394 [2024-12-09 10:04:39.679658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.394 [2024-12-09 10:04:39.679710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:32.394 [2024-12-09 10:04:39.693742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee6738 00:52:32.394 [2024-12-09 10:04:39.696076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.394 [2024-12-09 10:04:39.696111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:32.394 [2024-12-09 10:04:39.710221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee6fa8 00:52:32.394 [2024-12-09 10:04:39.712492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.394 [2024-12-09 10:04:39.712528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:32.394 [2024-12-09 10:04:39.726539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee7818 00:52:32.394 [2024-12-09 10:04:39.728819] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.394 [2024-12-09 10:04:39.728874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:32.394 [2024-12-09 10:04:39.743070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee8088 00:52:32.394 [2024-12-09 10:04:39.745284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.394 [2024-12-09 10:04:39.745338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:32.394 [2024-12-09 10:04:39.759149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee88f8 00:52:32.395 [2024-12-09 10:04:39.761398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.395 [2024-12-09 10:04:39.761449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:32.395 [2024-12-09 10:04:39.775706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee9168 00:52:32.395 [2024-12-09 10:04:39.777945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.395 [2024-12-09 10:04:39.778005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:32.395 [2024-12-09 10:04:39.791966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ee99d8 00:52:32.395 [2024-12-09 10:04:39.794205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.395 [2024-12-09 10:04:39.794256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:32.395 [2024-12-09 10:04:39.808236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eea248 00:52:32.395 [2024-12-09 10:04:39.810353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.395 [2024-12-09 10:04:39.810404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:32.395 [2024-12-09 10:04:39.824494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eeaab8 00:52:32.395 [2024-12-09 10:04:39.826589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.395 [2024-12-09 10:04:39.826638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:32.395 [2024-12-09 10:04:39.841387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eeb328 00:52:32.395 [2024-12-09 
10:04:39.843578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.395 [2024-12-09 10:04:39.843629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:32.395 [2024-12-09 10:04:39.857696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eebb98 00:52:32.395 [2024-12-09 10:04:39.859842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.395 [2024-12-09 10:04:39.859892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:32.395 [2024-12-09 10:04:39.874029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eec408 00:52:32.395 [2024-12-09 10:04:39.876129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.395 [2024-12-09 10:04:39.876165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:32.395 [2024-12-09 10:04:39.890500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eecc78 00:52:32.395 [2024-12-09 10:04:39.892566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.395 [2024-12-09 10:04:39.892601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:39.907087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eed4e8 00:52:32.654 [2024-12-09 10:04:39.909128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:39.909163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:39.923537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eedd58 00:52:32.654 [2024-12-09 10:04:39.925538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:39.925572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:39.940018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eee5c8 00:52:32.654 [2024-12-09 10:04:39.942005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:39.942036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:39.956483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eeee38 00:52:32.654 
[2024-12-09 10:04:39.958436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:39.958472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:39.972925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eef6a8 00:52:32.654 [2024-12-09 10:04:39.974862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:39.974896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:39.989387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016eeff18 00:52:32.654 [2024-12-09 10:04:39.991300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:39.991335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:40.005839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef0788 00:52:32.654 [2024-12-09 10:04:40.007735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:40.007770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:40.022340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef0ff8 00:52:32.654 [2024-12-09 10:04:40.024735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:40.024781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:40.039515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef1868 00:52:32.654 [2024-12-09 10:04:40.041415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:40.041453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:40.056107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef20d8 00:52:32.654 [2024-12-09 10:04:40.057935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:40.057979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:40.072618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef2948 
00:52:32.654 [2024-12-09 10:04:40.074431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:40.074465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:40.089187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef31b8 00:52:32.654 [2024-12-09 10:04:40.090953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:40.091012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:40.105541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef3a28 00:52:32.654 [2024-12-09 10:04:40.107357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:40.107408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:40.122101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef4298 00:52:32.654 [2024-12-09 10:04:40.123868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:40.123905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:32.654 [2024-12-09 10:04:40.138753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef4b08 00:52:32.654 [2024-12-09 10:04:40.140510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.654 [2024-12-09 10:04:40.140560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:32.913 [2024-12-09 10:04:40.155326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef5378 00:52:32.913 [2024-12-09 10:04:40.157067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.913 [2024-12-09 10:04:40.157104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:32.913 [2024-12-09 10:04:40.171740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef5be8 00:52:32.913 [2024-12-09 10:04:40.173440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.913 [2024-12-09 10:04:40.173476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:32.913 [2024-12-09 10:04:40.188136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with 
pdu=0x200016ef6458 00:52:32.913 [2024-12-09 10:04:40.189783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.913 [2024-12-09 10:04:40.189836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:32.913 [2024-12-09 10:04:40.204932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7b70) with pdu=0x200016ef6cc8 00:52:32.913 [2024-12-09 10:04:40.207053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:32.913 [2024-12-09 10:04:40.207090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:32.913 15244.50 IOPS, 59.55 MiB/s 00:52:32.913 Latency(us) 00:52:32.913 [2024-12-09T10:04:40.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:32.913 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:52:32.913 nvme0n1 : 2.01 15286.01 59.71 0.00 0.00 8366.10 2338.44 31695.59 00:52:32.913 [2024-12-09T10:04:40.415Z] =================================================================================================================== 00:52:32.913 [2024-12-09T10:04:40.415Z] Total : 15286.01 59.71 0.00 0.00 8366.10 2338.44 31695.59 00:52:32.913 { 00:52:32.913 "results": [ 00:52:32.913 { 00:52:32.913 "job": "nvme0n1", 00:52:32.913 "core_mask": "0x2", 00:52:32.913 "workload": "randwrite", 00:52:32.913 "status": "finished", 00:52:32.913 "queue_depth": 128, 00:52:32.913 "io_size": 4096, 00:52:32.913 "runtime": 2.011186, 00:52:32.913 "iops": 15286.005371954658, 00:52:32.913 "mibps": 59.71095848419788, 00:52:32.913 "io_failed": 0, 00:52:32.913 "io_timeout": 0, 00:52:32.913 "avg_latency_us": 8366.097774571004, 00:52:32.913 "min_latency_us": 2338.4436363636364, 00:52:32.913 "max_latency_us": 31695.592727272728 00:52:32.913 } 00:52:32.913 ], 00:52:32.913 "core_count": 1 00:52:32.913 } 00:52:32.913 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:52:32.913 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:52:32.913 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:52:32.913 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:52:32.913 | .driver_specific 00:52:32.913 | .nvme_error 00:52:32.913 | .status_code 00:52:32.913 | .command_transient_transport_error' 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 )) 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80401 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80401 ']' 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80401 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80401 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:33.173 killing process with pid 80401 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80401' 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80401 00:52:33.173 Received shutdown signal, test time was about 2.000000 seconds 00:52:33.173 00:52:33.173 Latency(us) 00:52:33.173 [2024-12-09T10:04:40.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:33.173 [2024-12-09T10:04:40.675Z] =================================================================================================================== 00:52:33.173 [2024-12-09T10:04:40.675Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:33.173 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80401 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80454 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80454 /var/tmp/bperf.sock 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80454 ']' 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:52:33.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:33.432 10:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:33.432 [2024-12-09 10:04:40.815128] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:52:33.432 [2024-12-09 10:04:40.815231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80454 ] 00:52:33.433 I/O size of 131072 is greater than zero copy threshold (65536). 00:52:33.433 Zero copy mechanism will not be used. 00:52:33.691 [2024-12-09 10:04:40.966144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:33.691 [2024-12-09 10:04:40.998777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:33.691 [2024-12-09 10:04:41.027966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:33.691 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:33.691 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:52:33.691 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:52:33.691 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:52:33.950 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:52:33.950 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.950 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:33.950 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.950 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:33.950 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:52:34.209 nvme0n1 00:52:34.468 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:52:34.468 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.468 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:34.468 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.468 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:52:34.468 10:04:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:52:34.468 I/O size of 131072 is greater than zero copy threshold (65536). 00:52:34.468 Zero copy mechanism will not be used. 00:52:34.468 Running I/O for 2 seconds... 
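For reference, the nvmf_digest_error path traced above boils down to this sequence: bdevperf is started in wait-for-RPC mode (-z), per-command NVMe error statistics and unlimited bdev retries are enabled, the target's crc32c accel operations are reset and then told to corrupt every 32nd result, the controller is attached over TCP with data digest enabled (--ddgst), the workload is run, and the command_transient_transport_error counter is read back (120 in the first subtest above, satisfying the (( errcount > 0 )) check). The lines below are a plain-shell restatement of those traced commands, not part of the test suite itself: the $rpc and $bperf_sock variable names, backgrounding bdevperf with & instead of the suite's waitforlisten helper, and sending the accel injection to the target's default RPC socket are illustrative assumptions.

    # Illustrative recap of the commands traced above; paths, flags and the jq filter are
    # copied from the log, the framing as a standalone script is an assumption.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # Start bdevperf paused (-z) on its own RPC socket: 128 KiB random writes, QD 16, 2 s.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &

    # Count per-command NVMe errors and retry transient failures inside the bdev layer.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # On the target application (rpc_cmd in the trace): start from a clean injection state.
    "$rpc" accel_error_inject_error -o crc32c -t disable

    # Attach the controller with data digest enabled on the bdevperf side.
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd crc32c operation so some PDU data digests are wrong on the wire,
    # then run the workload.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests

    # The check: the transient transport error counter must be non-zero afterwards.
    errcount=$("$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))

Because --bdev-retry-count -1 keeps retrying the 00/22 TRANSIENT TRANSPORT ERROR completions inside the bdev layer, this is presumably why the JSON summary above reports io_failed 0 even though the error counter itself is non-zero.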
00:52:34.468 [2024-12-09 10:04:41.858812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.468 [2024-12-09 10:04:41.858913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.468 [2024-12-09 10:04:41.858944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.468 [2024-12-09 10:04:41.864202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.468 [2024-12-09 10:04:41.864291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.468 [2024-12-09 10:04:41.864316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.468 [2024-12-09 10:04:41.869347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.468 [2024-12-09 10:04:41.869426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.468 [2024-12-09 10:04:41.869451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.468 [2024-12-09 10:04:41.874575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.468 [2024-12-09 10:04:41.874649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.468 [2024-12-09 10:04:41.874673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.468 [2024-12-09 10:04:41.879797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.468 [2024-12-09 10:04:41.879883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.468 [2024-12-09 10:04:41.879906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.468 [2024-12-09 10:04:41.885027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.468 [2024-12-09 10:04:41.885134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.468 [2024-12-09 10:04:41.885160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.890220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.890332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.890367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.895382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.895456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.895479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.900571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.900647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.900670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.905702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.905778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.905802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.910865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.910953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.910991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.915974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.916073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.916096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.921164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.921270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.921292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.926374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.926449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.926473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.931562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.931654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.931677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.936825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.936915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.936939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.942025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.942120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.942143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.947173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.947247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.947270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.952293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.952373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.952396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.957475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.957552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.957575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.962677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.962770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.962793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.469 [2024-12-09 10:04:41.967859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.469 [2024-12-09 10:04:41.967931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.469 [2024-12-09 10:04:41.967968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.729 [2024-12-09 10:04:41.973045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.729 [2024-12-09 10:04:41.973119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.729 [2024-12-09 10:04:41.973142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.729 [2024-12-09 10:04:41.978198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.729 [2024-12-09 10:04:41.978306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.729 [2024-12-09 10:04:41.978330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.729 [2024-12-09 10:04:41.983351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.729 [2024-12-09 10:04:41.983437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.729 [2024-12-09 10:04:41.983461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.729 [2024-12-09 10:04:41.988509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.729 [2024-12-09 10:04:41.988625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.729 [2024-12-09 10:04:41.988659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.729 [2024-12-09 10:04:41.993701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.729 [2024-12-09 10:04:41.993780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:41.993803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:41.998804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:41.998888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:41.998911] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.003912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.004016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.004040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.009090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.009195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.009218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.014275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.014376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.014400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.019445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.019526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.019549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.024616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.024692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.024715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.029840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.029919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.029943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.034997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.035104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.035128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.040178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.040264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.040287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.045353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.045429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.045452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.050553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.050643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.050667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.055716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.055807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.055829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.060947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.061074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.061103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.066185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.066282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.066318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.071280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.071370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 
10:04:42.071393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.076468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.076542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.076565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.081622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.081695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.081718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.086710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.086799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.086821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.091893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.092047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.092104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.097107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.097212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.097240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.102271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.102369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.102393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.107428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.107516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:34.730 [2024-12-09 10:04:42.107539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.112737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.112826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.112848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.117998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.118122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.118157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.123285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.123380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.123404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.128527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.128625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.128648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.133762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.730 [2024-12-09 10:04:42.133852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.730 [2024-12-09 10:04:42.133874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.730 [2024-12-09 10:04:42.139012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.139124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.139156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.144196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.144272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.144295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.149308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.149398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.149421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.154419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.154508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.154531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.159601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.159719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.159742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.164765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.164864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.164887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.169885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.170021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.170046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.175120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.175210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.175239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.180263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.180348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.180370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.185396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.185489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.185512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.190550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.190651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.190673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.195757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.195875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.195908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.200928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.201016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.201039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.206077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.206189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.206223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.211283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.211373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.211396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.216391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.216513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.216535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.221611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.221739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.221762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.731 [2024-12-09 10:04:42.226957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.731 [2024-12-09 10:04:42.227046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.731 [2024-12-09 10:04:42.227069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.232221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.232308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.232332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.237329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.237421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.237447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.242484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.242559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.242581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.247619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.247710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.247740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.252816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.252890] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.252913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.258027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.258114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.258137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.263168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.263257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.263280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.268484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.268597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.268621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.273681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.273756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.273779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.278860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.278936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.278959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.284113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.284186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.284209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.289273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.289357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.289381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.294454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.294535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.294559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.299677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.299787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.299813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.304909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.304999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.305023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.310158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.310235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.993 [2024-12-09 10:04:42.310258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.993 [2024-12-09 10:04:42.315311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.993 [2024-12-09 10:04:42.315390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.315413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.320457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.320559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.320581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.325619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 
10:04:42.325693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.325717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.330831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.330904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.330927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.336015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.336104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.336127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.341127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.341229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.341252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.346304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.346380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.346404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.351455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.351530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.351553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.356636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.356724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.356747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.361767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 
00:52:34.994 [2024-12-09 10:04:42.361840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.361863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.366920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.367029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.367052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.372081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.372167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.372191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.377187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.377264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.377287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.382305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.382394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.382423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.387429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.387515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.387538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.392582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.392657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.392680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.397730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with 
pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.397804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.397827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.402889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.402977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.403001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.408080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.408165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.408188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.413189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.413266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.413288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.418327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.418402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.418424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.423441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.423533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.423556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.428596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.428672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.428695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.433810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.433886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.433910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.438987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.439064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.439089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.444183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.444270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.444292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.449327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.449409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.449432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.454443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.994 [2024-12-09 10:04:42.454534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.994 [2024-12-09 10:04:42.454562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.994 [2024-12-09 10:04:42.459539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.995 [2024-12-09 10:04:42.459616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.995 [2024-12-09 10:04:42.459639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.995 [2024-12-09 10:04:42.464745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.995 [2024-12-09 10:04:42.464821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.995 [2024-12-09 10:04:42.464844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.995 [2024-12-09 10:04:42.469916] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.995 [2024-12-09 10:04:42.470029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.995 [2024-12-09 10:04:42.470061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:34.995 [2024-12-09 10:04:42.475035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.995 [2024-12-09 10:04:42.475117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.995 [2024-12-09 10:04:42.475139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:34.995 [2024-12-09 10:04:42.480175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.995 [2024-12-09 10:04:42.480246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.995 [2024-12-09 10:04:42.480269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:34.995 [2024-12-09 10:04:42.485346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.995 [2024-12-09 10:04:42.485419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.995 [2024-12-09 10:04:42.485442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:34.995 [2024-12-09 10:04:42.490507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:34.995 [2024-12-09 10:04:42.490598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:34.995 [2024-12-09 10:04:42.490621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.495730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.495807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.495831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.500904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.501002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.501056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.506117] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.506220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.506249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.511322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.511396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.511420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.516513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.516600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.516623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.521775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.521887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.521910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.526933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.527021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.527045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.532171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.532250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.532273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.537538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.537628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.537651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.262 
[2024-12-09 10:04:42.542924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.543062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.543094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.548143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.262 [2024-12-09 10:04:42.548228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.262 [2024-12-09 10:04:42.548251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.262 [2024-12-09 10:04:42.553278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.553363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.553387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.558518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.558621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.558644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.563745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.563837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.563862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.569002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.569105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.569127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.574339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.574416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.574439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:52:35.263 [2024-12-09 10:04:42.579484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.579575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.579598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.584791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.584867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.584889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.589906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.589993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.590017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.595122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.595219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.595241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.600264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.600344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.600367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.605398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.605473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.605495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.610628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.610731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.610754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.615899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.616013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.616038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.621073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.621160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.621193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.626354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.626447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.626471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.631571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.631647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.631671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.636883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.636973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.636998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.642182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.642277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.642308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.647389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.647473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.647497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.652695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.652794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.652829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.657991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.658069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.658093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.663281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.663354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.663377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.668453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.668539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.668563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.673570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.673645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.673667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.678736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.678811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.678833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.683879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.683992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.684025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.689079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.689166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.689189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.694242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.694320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.694343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.699414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.699501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.699524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.704586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.704671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.704694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.709729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.709816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.709839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.714917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.715010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.715033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.720086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.720177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.720199] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.725226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.725300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.725324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.730391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.730466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.730488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.735571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.735659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.735682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.740724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.740800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.740823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.745914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.746003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.746028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.751109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.751184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.263 [2024-12-09 10:04:42.751208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.263 [2024-12-09 10:04:42.756316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.263 [2024-12-09 10:04:42.756391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.264 [2024-12-09 10:04:42.756415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.264 [2024-12-09 10:04:42.761458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.264 [2024-12-09 10:04:42.761544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.264 [2024-12-09 10:04:42.761567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.766635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.766725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.766748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.771750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.771845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.771867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.776870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.776946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.776983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.782008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.782085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.782108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.787169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.787255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.787277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.792286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.792380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 
10:04:42.792404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.797403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.797488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.797510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.802632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.802730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.802753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.807759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.807835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.807858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.812932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.813022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.813044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.818152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.818225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.818248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.823296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.823373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.823396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.828460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.828548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:35.523 [2024-12-09 10:04:42.828571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.833600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.833677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.833700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.838780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.523 [2024-12-09 10:04:42.838856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.523 [2024-12-09 10:04:42.838879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.523 [2024-12-09 10:04:42.843948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.844049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.844084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.849215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.849293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.849318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.854391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.854486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.854522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.524 5954.00 IOPS, 744.25 MiB/s [2024-12-09T10:04:43.026Z] [2024-12-09 10:04:42.860585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.860661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.860685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.865766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.865851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.865874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.870911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.871048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.871080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.876043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.876163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.876186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.881279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.881355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.881378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.886434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.886506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.886529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.891504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.891605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.891627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.896680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.896770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.896793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.901915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.902040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.902072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.907085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.907170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.907199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.912269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.912354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.912377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.917399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.917487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.917509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.922562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.922698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.922730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.927792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.927881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.927904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.932981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.933089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.933111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.938298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 
10:04:42.938389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.938411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.943545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.943621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.943644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.948898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.949033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.949064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.954075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.954174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.954199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.959229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.959321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.959344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.964364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.964451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.964474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.969492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.969582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.969605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.974706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 
00:52:35.524 [2024-12-09 10:04:42.974782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.974805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.979918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.980005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.980028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.524 [2024-12-09 10:04:42.985121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.524 [2024-12-09 10:04:42.985197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.524 [2024-12-09 10:04:42.985219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.525 [2024-12-09 10:04:42.990296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.525 [2024-12-09 10:04:42.990382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.525 [2024-12-09 10:04:42.990404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.525 [2024-12-09 10:04:42.995454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.525 [2024-12-09 10:04:42.995552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.525 [2024-12-09 10:04:42.995586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.525 [2024-12-09 10:04:43.000575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.525 [2024-12-09 10:04:43.000652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.525 [2024-12-09 10:04:43.000674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.525 [2024-12-09 10:04:43.005736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.525 [2024-12-09 10:04:43.005812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.525 [2024-12-09 10:04:43.005834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.525 [2024-12-09 10:04:43.010873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with 
pdu=0x200016eff3c8 00:52:35.525 [2024-12-09 10:04:43.010947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.525 [2024-12-09 10:04:43.010983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.525 [2024-12-09 10:04:43.016078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.525 [2024-12-09 10:04:43.016154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.525 [2024-12-09 10:04:43.016176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.525 [2024-12-09 10:04:43.021292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.525 [2024-12-09 10:04:43.021375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.525 [2024-12-09 10:04:43.021397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.785 [2024-12-09 10:04:43.026501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.785 [2024-12-09 10:04:43.026600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.785 [2024-12-09 10:04:43.026623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.785 [2024-12-09 10:04:43.031695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.785 [2024-12-09 10:04:43.031768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.785 [2024-12-09 10:04:43.031806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.785 [2024-12-09 10:04:43.036932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.785 [2024-12-09 10:04:43.037018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.785 [2024-12-09 10:04:43.037040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.785 [2024-12-09 10:04:43.042153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.785 [2024-12-09 10:04:43.042228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.785 [2024-12-09 10:04:43.042267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.785 [2024-12-09 10:04:43.047371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.785 [2024-12-09 10:04:43.047457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.785 [2024-12-09 10:04:43.047480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.785 [2024-12-09 10:04:43.052611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.785 [2024-12-09 10:04:43.052692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.785 [2024-12-09 10:04:43.052715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.785 [2024-12-09 10:04:43.057771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.785 [2024-12-09 10:04:43.057857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.785 [2024-12-09 10:04:43.057881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.785 [2024-12-09 10:04:43.062990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.785 [2024-12-09 10:04:43.063107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.785 [2024-12-09 10:04:43.063138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.785 [2024-12-09 10:04:43.068185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.785 [2024-12-09 10:04:43.068273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.785 [2024-12-09 10:04:43.068296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.785 [2024-12-09 10:04:43.073330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.785 [2024-12-09 10:04:43.073415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.073437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.078488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.078575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.078599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.083726] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.083820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.083842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.088945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.089030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.089053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.094149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.094243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.094292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.099366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.099442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.099463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.104592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.104663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.104685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.109768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.109845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.109867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.114969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.115058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.115080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.120206] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.120292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.120315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.125341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.125425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.125447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.130571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.130660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.130682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.135851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.135928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.135951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.141115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.141207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.141235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.146280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.146366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.146388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.151466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.151542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.151564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.786 
[2024-12-09 10:04:43.156710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.156784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.156806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.161902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.162026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.162075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.167145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.167216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.167239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.172298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.172384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.172407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.177442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.177514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.177536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.182603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.182691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.182714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.187820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.187895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.187917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.193061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.193136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.193158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.198222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.198324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.198346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.203431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.203503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.203524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.208699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.208770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.208792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.213837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.786 [2024-12-09 10:04:43.213913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.786 [2024-12-09 10:04:43.213936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.786 [2024-12-09 10:04:43.219051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.219121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.219143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.224277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.224347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.224370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.229477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.229550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.229572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.234751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.234826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.234847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.239906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.239993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.240015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.245165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.245239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.245261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.250343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.250428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.250450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.255546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.255639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.255661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.260790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.260876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.260898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.265966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.266079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.266102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.271268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.271345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.271368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.276433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.276518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.276541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:35.787 [2024-12-09 10:04:43.281622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:35.787 [2024-12-09 10:04:43.281707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:35.787 [2024-12-09 10:04:43.281729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.047 [2024-12-09 10:04:43.286790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.047 [2024-12-09 10:04:43.286867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.047 [2024-12-09 10:04:43.286889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.047 [2024-12-09 10:04:43.292000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.047 [2024-12-09 10:04:43.292085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.047 [2024-12-09 10:04:43.292108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.047 [2024-12-09 10:04:43.297243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.047 [2024-12-09 10:04:43.297337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.047 [2024-12-09 10:04:43.297359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.047 [2024-12-09 10:04:43.302591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.047 [2024-12-09 10:04:43.302664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.047 [2024-12-09 10:04:43.302686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.047 [2024-12-09 10:04:43.307774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.047 [2024-12-09 10:04:43.307850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.047 [2024-12-09 10:04:43.307873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.047 [2024-12-09 10:04:43.313066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.047 [2024-12-09 10:04:43.313165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.047 [2024-12-09 10:04:43.313187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.047 [2024-12-09 10:04:43.318206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.047 [2024-12-09 10:04:43.318280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.318303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.323400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.323485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.323507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.328516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.328602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.328624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.333778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.333871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.333894] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.338994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.339086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.339109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.344256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.344345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.344367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.349411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.349506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.349528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.354584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.354675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.354697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.359765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.359840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.359862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.364979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.365056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.365078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.370143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.370220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.370244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.375356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.375460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.375483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.380758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.380872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.380895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.385861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.385944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.385982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.391061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.391159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.391181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.396188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.396277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.396300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.401353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.401439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.401462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.406512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.406589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 
10:04:43.406612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.411733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.411827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.411850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.416911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.417007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.417030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.422128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.422226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.422249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.427387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.427488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.427511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.432664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.432739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.432762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.437926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.438029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.438052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.443136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.443227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:36.048 [2024-12-09 10:04:43.443250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.448325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.448411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.448433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.453571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.453662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.453685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.458788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.458865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.048 [2024-12-09 10:04:43.458888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.048 [2024-12-09 10:04:43.464181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.048 [2024-12-09 10:04:43.464276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.464299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.469417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.469518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.469541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.474761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.474861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.474883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.479930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.480033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.480057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.485178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.485266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.485289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.490308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.490394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.490416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.495457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.495549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.495572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.500571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.500648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.500671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.505750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.505827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.505850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.510907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.511010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.511033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.516116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.516191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.516213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.521316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.521406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.521429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.526467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.526543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.526566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.531634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.531712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.531734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.536794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.536869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.536892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.541950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.542053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.542077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.049 [2024-12-09 10:04:43.547084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.049 [2024-12-09 10:04:43.547181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.049 [2024-12-09 10:04:43.547203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.309 [2024-12-09 10:04:43.552191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.309 [2024-12-09 10:04:43.552277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.309 [2024-12-09 10:04:43.552300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.309 [2024-12-09 10:04:43.557337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.309 [2024-12-09 10:04:43.557422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.309 [2024-12-09 10:04:43.557444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.309 [2024-12-09 10:04:43.562482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.309 [2024-12-09 10:04:43.562568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.309 [2024-12-09 10:04:43.562591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.309 [2024-12-09 10:04:43.567635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.309 [2024-12-09 10:04:43.567727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.309 [2024-12-09 10:04:43.567749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.309 [2024-12-09 10:04:43.572762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.309 [2024-12-09 10:04:43.572836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.309 [2024-12-09 10:04:43.572858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.309 [2024-12-09 10:04:43.578006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.309 [2024-12-09 10:04:43.578080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.309 [2024-12-09 10:04:43.578104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.309 [2024-12-09 10:04:43.583144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.583232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.583255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.588295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.588369] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.588393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.593475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.593571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.593594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.598639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.598738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.598760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.603801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.603875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.603898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.609010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.609070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.609093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.614148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.614235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.614257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.619294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.619380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.619403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.624460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.624547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.624570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.629604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.629702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.629725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.634777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.634851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.634874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.639906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.640013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.640037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.645074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.645164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.645188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.650198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.650272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.650295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.655378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.655462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.655484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.660534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 
10:04:43.660620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.660643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.665711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.665791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.665814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.670840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.670925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.670948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.676035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.676157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.676180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.681182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.681264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.681287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.686404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.686491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.686513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.691664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.691740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.691762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.696932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 
00:52:36.310 [2024-12-09 10:04:43.697088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.697111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.702187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.702278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.702301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.707477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.707577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.707599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.712789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.712862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.712885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.718038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.718130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.718153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.310 [2024-12-09 10:04:43.723310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.310 [2024-12-09 10:04:43.723397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.310 [2024-12-09 10:04:43.723419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.728622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.728736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.728759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.733871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with 
pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.733946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.733969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.739208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.739301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.739324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.744334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.744421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.744448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.749585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.749672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.749694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.754745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.754837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.754860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.759998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.760121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.760143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.765335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.765442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.765464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.770523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.770598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.770621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.775838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.775914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.775937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.781151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.781242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.781266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.786323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.786410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.786433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.791457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.791531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.791554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.796771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.796842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.796864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.801953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.802043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.802065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.311 [2024-12-09 10:04:43.807182] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.311 [2024-12-09 10:04:43.807257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.311 [2024-12-09 10:04:43.807279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.570 [2024-12-09 10:04:43.812345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.570 [2024-12-09 10:04:43.812438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.570 [2024-12-09 10:04:43.812461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.570 [2024-12-09 10:04:43.817532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.570 [2024-12-09 10:04:43.817607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.570 [2024-12-09 10:04:43.817629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.570 [2024-12-09 10:04:43.822685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.570 [2024-12-09 10:04:43.822776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.570 [2024-12-09 10:04:43.822798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.570 [2024-12-09 10:04:43.827835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.570 [2024-12-09 10:04:43.827912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.570 [2024-12-09 10:04:43.827935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.570 [2024-12-09 10:04:43.833028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.570 [2024-12-09 10:04:43.833125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.570 [2024-12-09 10:04:43.833147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.570 [2024-12-09 10:04:43.838164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.570 [2024-12-09 10:04:43.838249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.570 [2024-12-09 10:04:43.838272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:36.571 [2024-12-09 10:04:43.843275] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.571 [2024-12-09 10:04:43.843359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.571 [2024-12-09 10:04:43.843382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:36.571 [2024-12-09 10:04:43.848426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.571 [2024-12-09 10:04:43.848515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.571 [2024-12-09 10:04:43.848537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:36.571 [2024-12-09 10:04:43.853593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb7d30) with pdu=0x200016eff3c8 00:52:36.571 [2024-12-09 10:04:43.853680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:36.571 [2024-12-09 10:04:43.853702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:36.571 5956.00 IOPS, 744.50 MiB/s 00:52:36.571 Latency(us) 00:52:36.571 [2024-12-09T10:04:44.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:36.571 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:52:36.571 nvme0n1 : 2.00 5954.09 744.26 0.00 0.00 2681.26 1824.58 6940.86 00:52:36.571 [2024-12-09T10:04:44.073Z] =================================================================================================================== 00:52:36.571 [2024-12-09T10:04:44.073Z] Total : 5954.09 744.26 0.00 0.00 2681.26 1824.58 6940.86 00:52:36.571 { 00:52:36.571 "results": [ 00:52:36.571 { 00:52:36.571 "job": "nvme0n1", 00:52:36.571 "core_mask": "0x2", 00:52:36.571 "workload": "randwrite", 00:52:36.571 "status": "finished", 00:52:36.571 "queue_depth": 16, 00:52:36.571 "io_size": 131072, 00:52:36.571 "runtime": 2.003328, 00:52:36.571 "iops": 5954.092390262603, 00:52:36.571 "mibps": 744.2615487828253, 00:52:36.571 "io_failed": 0, 00:52:36.571 "io_timeout": 0, 00:52:36.571 "avg_latency_us": 2681.2588695811232, 00:52:36.571 "min_latency_us": 1824.581818181818, 00:52:36.571 "max_latency_us": 6940.858181818182 00:52:36.571 } 00:52:36.571 ], 00:52:36.571 "core_count": 1 00:52:36.571 } 00:52:36.571 10:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:52:36.571 10:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:52:36.571 10:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:52:36.571 | .driver_specific 00:52:36.571 | .nvme_error 00:52:36.571 | .status_code 00:52:36.571 | .command_transient_transport_error' 00:52:36.571 10:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 385 > 0 
)) 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80454 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80454 ']' 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80454 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80454 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:36.830 killing process with pid 80454 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80454' 00:52:36.830 Received shutdown signal, test time was about 2.000000 seconds 00:52:36.830 00:52:36.830 Latency(us) 00:52:36.830 [2024-12-09T10:04:44.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:36.830 [2024-12-09T10:04:44.332Z] =================================================================================================================== 00:52:36.830 [2024-12-09T10:04:44.332Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80454 00:52:36.830 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80454 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80276 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80276 ']' 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80276 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80276 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:37.089 killing process with pid 80276 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80276' 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80276 00:52:37.089 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80276 00:52:37.348 00:52:37.348 real 0m15.350s 00:52:37.348 user 0m30.495s 00:52:37.348 sys 0m4.186s 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:52:37.348 ************************************ 00:52:37.348 END TEST nvmf_digest_error 00:52:37.348 ************************************ 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:52:37.348 rmmod nvme_tcp 00:52:37.348 rmmod nvme_fabrics 00:52:37.348 rmmod nvme_keyring 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80276 ']' 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80276 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80276 ']' 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80276 00:52:37.348 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80276) - No such process 00:52:37.348 Process with pid 80276 is not found 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80276 is not found' 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:52:37.348 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:52:37.349 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:52:37.349 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:52:37.349 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:52:37.349 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:52:37.349 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:52:37.608 10:04:44 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:52:37.608 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:52:37.608 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:52:37.608 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:52:37.608 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:52:37.608 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:52:37.608 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:52:37.608 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:52:37.608 10:04:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:52:37.608 00:52:37.608 real 0m32.688s 00:52:37.608 user 1m3.048s 00:52:37.608 sys 0m9.061s 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:52:37.608 ************************************ 00:52:37.608 END TEST nvmf_digest 00:52:37.608 ************************************ 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.608 ************************************ 00:52:37.608 START TEST nvmf_host_multipath 00:52:37.608 ************************************ 00:52:37.608 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:52:37.869 * Looking for test storage... 
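Editor's note: the digest-error run that just ended above (END TEST nvmf_digest_error) passes or fails on the transient transport error counter read back from the bdevperf RPC socket. A minimal sketch of that gate, reconstructed from the trace — the rpc.py path, the /var/tmp/bperf.sock socket, the nvme0n1 bdev name and the jq filter are all taken from the log; the exact body of get_transient_errcount in host/digest.sh may differ:

    get_transient_errcount() {
        local bdev=$1
        # Ask the bdevperf instance (not the target) for per-bdev I/O statistics
        # and pull out the NVMe transient transport error count.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # Every injected data-digest error must surface as a transient transport error;
    # the run above read back 385 of them, so the (( count > 0 )) gate passed.
    count=$(get_transient_errcount nvme0n1)
    (( count > 0 ))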
00:52:37.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:52:37.869 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:37.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:37.870 --rc genhtml_branch_coverage=1 00:52:37.870 --rc genhtml_function_coverage=1 00:52:37.870 --rc genhtml_legend=1 00:52:37.870 --rc geninfo_all_blocks=1 00:52:37.870 --rc geninfo_unexecuted_blocks=1 00:52:37.870 00:52:37.870 ' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:37.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:37.870 --rc genhtml_branch_coverage=1 00:52:37.870 --rc genhtml_function_coverage=1 00:52:37.870 --rc genhtml_legend=1 00:52:37.870 --rc geninfo_all_blocks=1 00:52:37.870 --rc geninfo_unexecuted_blocks=1 00:52:37.870 00:52:37.870 ' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:37.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:37.870 --rc genhtml_branch_coverage=1 00:52:37.870 --rc genhtml_function_coverage=1 00:52:37.870 --rc genhtml_legend=1 00:52:37.870 --rc geninfo_all_blocks=1 00:52:37.870 --rc geninfo_unexecuted_blocks=1 00:52:37.870 00:52:37.870 ' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:37.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:37.870 --rc genhtml_branch_coverage=1 00:52:37.870 --rc genhtml_function_coverage=1 00:52:37.870 --rc genhtml_legend=1 00:52:37.870 --rc geninfo_all_blocks=1 00:52:37.870 --rc geninfo_unexecuted_blocks=1 00:52:37.870 00:52:37.870 ' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:52:37.870 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:52:37.870 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:52:37.871 Cannot find device "nvmf_init_br" 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:52:37.871 Cannot find device "nvmf_init_br2" 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:52:37.871 Cannot find device "nvmf_tgt_br" 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:52:37.871 Cannot find device "nvmf_tgt_br2" 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:52:37.871 Cannot find device "nvmf_init_br" 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:52:37.871 Cannot find device "nvmf_init_br2" 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:52:37.871 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:52:38.130 Cannot find device "nvmf_tgt_br" 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:52:38.130 Cannot find device "nvmf_tgt_br2" 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:52:38.130 Cannot find device "nvmf_br" 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:52:38.130 Cannot find device "nvmf_init_if" 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:52:38.130 Cannot find device "nvmf_init_if2" 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:52:38.130 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:52:38.130 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:52:38.130 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:52:38.389 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:52:38.389 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:52:38.389 00:52:38.389 --- 10.0.0.3 ping statistics --- 00:52:38.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:38.389 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:52:38.389 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:52:38.389 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:52:38.389 00:52:38.389 --- 10.0.0.4 ping statistics --- 00:52:38.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:38.389 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:52:38.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:52:38.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:52:38.389 00:52:38.389 --- 10.0.0.1 ping statistics --- 00:52:38.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:38.389 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:52:38.389 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:52:38.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:52:38.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:52:38.390 00:52:38.390 --- 10.0.0.2 ping statistics --- 00:52:38.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:38.390 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80761 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80761 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80761 ']' 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:38.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:38.390 10:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:52:38.390 [2024-12-09 10:04:45.806081] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
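With the bridged veth topology up, the entries above open the firewall for the NVMe/TCP port on both initiator-side interfaces (the ipts helper seen at common.sh@217-219 just wraps iptables and tags each rule with an SPDK_NVMF: comment so it can be removed later), verify connectivity in both directions with single pings, and then launch nvmf_tgt inside the namespace on cores 0-1 (-m 0x3), which becomes the pid 80761 the test waits for. Roughly, with the comment tagging and the waitforlisten polling omitted (the backgrounding with & is implied by the trace rather than shown explicitly):

  # accept NVMe/TCP (port 4420) on the initiator-side veths, allow hairpin forwarding on the bridge
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check: host -> namespaced target addresses, and back out of the namespace
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
  # run the SPDK NVMe-oF target inside the namespace (two reactors, all tracepoint groups enabled)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &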
00:52:38.390 [2024-12-09 10:04:45.806178] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:38.648 [2024-12-09 10:04:45.979279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:52:38.648 [2024-12-09 10:04:46.025984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:38.648 [2024-12-09 10:04:46.026074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:38.648 [2024-12-09 10:04:46.026090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:38.648 [2024-12-09 10:04:46.026102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:38.648 [2024-12-09 10:04:46.026113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:38.648 [2024-12-09 10:04:46.027121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:38.648 [2024-12-09 10:04:46.027135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:38.648 [2024-12-09 10:04:46.060399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:52:38.648 10:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:38.648 10:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:52:38.648 10:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:38.648 10:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:38.648 10:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:52:38.907 10:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:38.907 10:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80761 00:52:38.907 10:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:52:39.164 [2024-12-09 10:04:46.444261] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:39.164 10:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:52:39.423 Malloc0 00:52:39.423 10:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:52:39.681 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:52:39.940 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:52:40.198 [2024-12-09 10:04:47.659054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:52:40.198 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:52:40.457 [2024-12-09 10:04:47.915181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:52:40.457 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:52:40.457 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80809 00:52:40.457 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:40.457 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80809 /var/tmp/bdevperf.sock 00:52:40.457 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80809 ']' 00:52:40.457 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:40.457 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:40.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:40.457 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:40.457 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:40.457 10:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:52:41.025 10:04:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:41.025 10:04:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:52:41.025 10:04:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:52:41.284 10:04:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:52:41.543 Nvme0n1 00:52:41.543 10:04:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:52:41.801 Nvme0n1 00:52:41.801 10:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:52:41.801 10:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:52:42.737 10:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:52:42.737 10:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:52:43.303 10:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:52:43.562 10:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:52:43.562 10:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80851 00:52:43.562 10:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80761 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:52:43.562 10:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:52:50.133 10:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:52:50.133 10:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:50.133 Attaching 4 probes... 00:52:50.133 @path[10.0.0.3, 4421]: 17011 00:52:50.133 @path[10.0.0.3, 4421]: 17368 00:52:50.133 @path[10.0.0.3, 4421]: 17399 00:52:50.133 @path[10.0.0.3, 4421]: 17376 00:52:50.133 @path[10.0.0.3, 4421]: 17366 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80851 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:52:50.133 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:52:50.391 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:52:50.391 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80761 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:52:50.391 10:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80961 00:52:50.391 10:04:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:52:56.952 10:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:52:56.952 10:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:56.952 Attaching 4 probes... 00:52:56.952 @path[10.0.0.3, 4420]: 17298 00:52:56.952 @path[10.0.0.3, 4420]: 17658 00:52:56.952 @path[10.0.0.3, 4420]: 17742 00:52:56.952 @path[10.0.0.3, 4420]: 17655 00:52:56.952 @path[10.0.0.3, 4420]: 17753 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80961 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:52:56.952 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:52:57.211 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:52:57.211 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80761 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:52:57.211 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81079 00:52:57.211 10:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:03.791 Attaching 4 probes... 00:53:03.791 @path[10.0.0.3, 4421]: 13380 00:53:03.791 @path[10.0.0.3, 4421]: 17140 00:53:03.791 @path[10.0.0.3, 4421]: 17022 00:53:03.791 @path[10.0.0.3, 4421]: 17339 00:53:03.791 @path[10.0.0.3, 4421]: 17168 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81079 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:53:03.791 10:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:53:03.791 10:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:53:04.049 10:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:53:04.049 10:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81197 00:53:04.049 10:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80761 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:53:04.049 10:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:10.635 Attaching 4 probes... 
00:53:10.635 00:53:10.635 00:53:10.635 00:53:10.635 00:53:10.635 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81197 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:53:10.635 10:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:53:10.893 10:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:53:11.151 10:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:53:11.151 10:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81315 00:53:11.151 10:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:53:11.151 10:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80761 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:17.714 Attaching 4 probes... 
00:53:17.714 @path[10.0.0.3, 4421]: 16563 00:53:17.714 @path[10.0.0.3, 4421]: 16913 00:53:17.714 @path[10.0.0.3, 4421]: 16874 00:53:17.714 @path[10.0.0.3, 4421]: 16920 00:53:17.714 @path[10.0.0.3, 4421]: 16912 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81315 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:17.714 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:53:17.714 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:53:18.648 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:53:18.648 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81433 00:53:18.648 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80761 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:53:18.648 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:25.247 Attaching 4 probes... 
00:53:25.247 @path[10.0.0.3, 4420]: 16802 00:53:25.247 @path[10.0.0.3, 4420]: 16989 00:53:25.247 @path[10.0.0.3, 4420]: 17063 00:53:25.247 @path[10.0.0.3, 4420]: 17010 00:53:25.247 @path[10.0.0.3, 4420]: 17054 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81433 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:53:25.247 [2024-12-09 10:05:32.600168] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:53:25.247 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:53:25.506 10:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:53:32.077 10:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:53:32.077 10:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81613 00:53:32.077 10:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80761 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:53:32.077 10:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:53:38.649 10:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:53:38.649 10:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:38.649 Attaching 4 probes... 
00:53:38.649 @path[10.0.0.3, 4421]: 16877 00:53:38.649 @path[10.0.0.3, 4421]: 17115 00:53:38.649 @path[10.0.0.3, 4421]: 17136 00:53:38.649 @path[10.0.0.3, 4421]: 17096 00:53:38.649 @path[10.0.0.3, 4421]: 17152 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81613 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80809 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80809 ']' 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80809 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80809 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:53:38.649 killing process with pid 80809 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80809' 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80809 00:53:38.649 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80809 00:53:38.649 { 00:53:38.649 "results": [ 00:53:38.649 { 00:53:38.649 "job": "Nvme0n1", 00:53:38.649 "core_mask": "0x4", 00:53:38.649 "workload": "verify", 00:53:38.649 "status": "terminated", 00:53:38.649 "verify_range": { 00:53:38.649 "start": 0, 00:53:38.649 "length": 16384 00:53:38.649 }, 00:53:38.649 "queue_depth": 128, 00:53:38.649 "io_size": 4096, 00:53:38.649 "runtime": 55.989851, 00:53:38.649 "iops": 7328.220966331916, 00:53:38.649 "mibps": 28.625863149734048, 00:53:38.649 "io_failed": 0, 00:53:38.649 "io_timeout": 0, 00:53:38.649 "avg_latency_us": 17435.97758260243, 00:53:38.649 "min_latency_us": 197.35272727272726, 00:53:38.649 "max_latency_us": 7046430.72 00:53:38.649 } 00:53:38.650 ], 00:53:38.650 "core_count": 1 00:53:38.650 } 00:53:38.650 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80809 00:53:38.650 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:53:38.650 [2024-12-09 10:04:47.976641] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 
24.03.0 initialization... 00:53:38.650 [2024-12-09 10:04:47.976730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80809 ] 00:53:38.650 [2024-12-09 10:04:48.125547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:38.650 [2024-12-09 10:04:48.158352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:38.650 [2024-12-09 10:04:48.188421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:53:38.650 Running I/O for 90 seconds... 00:53:38.650 6804.00 IOPS, 26.58 MiB/s [2024-12-09T10:05:46.152Z] 7713.50 IOPS, 30.13 MiB/s [2024-12-09T10:05:46.152Z] 8057.00 IOPS, 31.47 MiB/s [2024-12-09T10:05:46.152Z] 8212.75 IOPS, 32.08 MiB/s [2024-12-09T10:05:46.152Z] 8311.00 IOPS, 32.46 MiB/s [2024-12-09T10:05:46.152Z] 8376.50 IOPS, 32.72 MiB/s [2024-12-09T10:05:46.152Z] 8421.00 IOPS, 32.89 MiB/s [2024-12-09T10:05:46.152Z] 8444.38 IOPS, 32.99 MiB/s [2024-12-09T10:05:46.152Z] [2024-12-09 10:04:57.710451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.650 [2024-12-09 10:04:57.710530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.710606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.650 [2024-12-09 10:04:57.710632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.710656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.650 [2024-12-09 10:04:57.710672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.710694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.650 [2024-12-09 10:04:57.710709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.710731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.650 [2024-12-09 10:04:57.710746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.710768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.650 [2024-12-09 10:04:57.710783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.710805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.650 [2024-12-09 10:04:57.710820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.710842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.650 [2024-12-09 10:04:57.710857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.710879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.710894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.710916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.710978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.711032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.711068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.711105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.711142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.711179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.711215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:53:38.650 [2024-12-09 10:04:57.711252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.711291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.711328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.711366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.650 [2024-12-09 10:04:57.711403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:53:38.650 [2024-12-09 10:04:57.711424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:84 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.711826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.711871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.711911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.711970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.711995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:53:38.651 [2024-12-09 10:04:57.712467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.651 [2024-12-09 10:04:57.712525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.712563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.712601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:53:38.651 [2024-12-09 10:04:57.712623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.651 [2024-12-09 10:04:57.712639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.712661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.712676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.712698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.712713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.712735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.712751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.712773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.712789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.712811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.712826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.712848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.712864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.712886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.712902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.712924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.712946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.712989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.713008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.713046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.713083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.713121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.652 [2024-12-09 10:04:57.713159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:53:38.652 [2024-12-09 10:04:57.713638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:53:38.652 [2024-12-09 10:04:57.713772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.652 [2024-12-09 10:04:57.713786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.713808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.713824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.713846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.713862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.713884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.713899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.713928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.713944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.713982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.653 [2024-12-09 10:04:57.714465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.653 [2024-12-09 10:04:57.714502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.653 [2024-12-09 10:04:57.714540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.653 [2024-12-09 10:04:57.714577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.653 [2024-12-09 10:04:57.714623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.653 [2024-12-09 10:04:57.714661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.653 [2024-12-09 10:04:57.714698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.653 [2024-12-09 10:04:57.714735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:53:38.653 [2024-12-09 10:04:57.714794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.653 [2024-12-09 10:04:57.714930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:53:38.653 [2024-12-09 10:04:57.714952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.714982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.715006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.715022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.715044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.715059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.715081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.715096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.715118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.715134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.715156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.715171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.715193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.715208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.715233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.715249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.715271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.715286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.715309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.715324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.716853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.654 [2024-12-09 10:04:57.716903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.716936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:04:57.716955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.717001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:04:57.717017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.717040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:04:57.717056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.717078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:04:57.717098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.717120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:04:57.717136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.717158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:04:57.717173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.717196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:04:57.717211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:04:57.717249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:04:57.717271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:53:38.654 8450.67 IOPS, 33.01 MiB/s [2024-12-09T10:05:46.156Z] 8487.60 IOPS, 33.15 MiB/s [2024-12-09T10:05:46.156Z] 8520.00 IOPS, 33.28 MiB/s [2024-12-09T10:05:46.156Z] 8553.00 IOPS, 33.41 MiB/s [2024-12-09T10:05:46.156Z] 8572.62 IOPS, 33.49 MiB/s [2024-12-09T10:05:46.156Z] 8593.43 IOPS, 33.57 MiB/s [2024-12-09T10:05:46.156Z] [2024-12-09 10:05:04.331030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:05:04.331103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:05:04.331172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:05:04.331196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:05:04.331220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:05:04.331236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:05:04.331284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:05:04.331302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:05:04.331324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:05:04.331340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:05:04.331361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 
10:05:04.331376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:05:04.331398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:05:04.331413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:05:04.331435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.654 [2024-12-09 10:05:04.331450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.654 [2024-12-09 10:05:04.331476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.655 [2024-12-09 10:05:04.331492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.655 [2024-12-09 10:05:04.331530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.655 [2024-12-09 10:05:04.331566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.655 [2024-12-09 10:05:04.331603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.655 [2024-12-09 10:05:04.331639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.655 [2024-12-09 10:05:04.331681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.655 [2024-12-09 10:05:04.331719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:115800 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.655 [2024-12-09 10:05:04.331764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.331804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.331842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.331883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.331921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.331943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.331974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.655 [2024-12-09 10:05:04.332415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.655 [2024-12-09 10:05:04.332476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:53:38.655 [2024-12-09 10:05:04.332499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e 
p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.332897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.332933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.332968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.332987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333330] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.656 [2024-12-09 10:05:04.333515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.333552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.333589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.656 [2024-12-09 10:05:04.333627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:53:38.656 [2024-12-09 10:05:04.333649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.333664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.333710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115936 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.333744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.333768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.333784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.333806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.333821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.333843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.333858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.333880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.333895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.333916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.333932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.333966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.333985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:93 nsid:1 lba:116016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.657 [2024-12-09 10:05:04.334294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.657 [2024-12-09 10:05:04.334331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.657 [2024-12-09 10:05:04.334368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.657 [2024-12-09 10:05:04.334405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.657 [2024-12-09 10:05:04.334442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.657 [2024-12-09 10:05:04.334479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 
10:05:04.334500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.657 [2024-12-09 10:05:04.334515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.657 [2024-12-09 10:05:04.334552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.657 [2024-12-09 10:05:04.334834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:53:38.657 [2024-12-09 10:05:04.334856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.334871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.334892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.334907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.334930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.334945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.334984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.335002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.335040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.335077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.335114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.335151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.335196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.335233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.335270] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.335306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.335344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.335382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.335418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.335458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.335495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.335517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.335532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.337817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.337848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.337884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.337901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.337945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 
[2024-12-09 10:05:04.337977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.338009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:116208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.338024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.338053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.338068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.338097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.338113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.338142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.338158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.338187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.338202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.338232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.658 [2024-12-09 10:05:04.338247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.338276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.338291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.338320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.338336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:53:38.658 [2024-12-09 10:05:04.338365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.658 [2024-12-09 10:05:04.338389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:04.338421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 
lba:115640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.659 [2024-12-09 10:05:04.338438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:04.338468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.659 [2024-12-09 10:05:04.338484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:04.338522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.659 [2024-12-09 10:05:04.338539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:04.338568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.659 [2024-12-09 10:05:04.338583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:04.338613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.659 [2024-12-09 10:05:04.338628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:53:38.659 8606.13 IOPS, 33.62 MiB/s [2024-12-09T10:05:46.161Z] 8068.25 IOPS, 31.52 MiB/s [2024-12-09T10:05:46.161Z] 8102.24 IOPS, 31.65 MiB/s [2024-12-09T10:05:46.161Z] 8124.33 IOPS, 31.74 MiB/s [2024-12-09T10:05:46.161Z] 8148.63 IOPS, 31.83 MiB/s [2024-12-09T10:05:46.161Z] 8173.70 IOPS, 31.93 MiB/s [2024-12-09T10:05:46.161Z] 8192.86 IOPS, 32.00 MiB/s [2024-12-09T10:05:46.161Z] 8208.45 IOPS, 32.06 MiB/s [2024-12-09T10:05:46.161Z] [2024-12-09 10:05:11.482585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.483086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.483255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.483372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.483466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.483548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.483637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.483718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:53:38.659 [2024-12-09 10:05:11.483831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.483915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.484045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.484181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.484273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.484370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.484451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.484538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.484637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.484755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.484856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.484942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.485061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.485148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.485248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.485321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.485413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.485498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.485586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.485671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.485759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.485851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.485944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.486053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.486144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.486239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.486333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.486419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.486512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.659 [2024-12-09 10:05:11.486593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:53:38.659 [2024-12-09 10:05:11.486682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.486767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.486859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.486945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.487083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.487158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.487254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.487337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.487426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.487507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.487586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.487667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.487767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.487853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.487943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.488034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.488154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.488253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.488348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.488433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.488521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.488612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.488706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.488793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.488882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.488981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.489246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.489329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.489450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:53:38.660 [2024-12-09 10:05:11.489538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.489619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.489690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.489778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.489860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.489953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.490065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.490158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.490241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.490339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.490413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.490510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.660 [2024-12-09 10:05:11.490596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.490688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.490775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.490856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.490936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.491056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.491132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.491219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.491301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.491397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.491487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.491580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.660 [2024-12-09 10:05:11.491676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:53:38.660 [2024-12-09 10:05:11.491771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.491857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.491946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.492067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.492181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.492277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.492358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.492429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.492505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.492591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.492685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.492770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.492859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.492945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.493067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.493157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.493258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.493340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.493428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.493509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.493604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.661 [2024-12-09 10:05:11.493691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.493780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.661 [2024-12-09 10:05:11.493878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.493984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.661 [2024-12-09 10:05:11.494082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.494175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.661 [2024-12-09 10:05:11.494247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.494335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.661 [2024-12-09 10:05:11.494420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.494513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.661 [2024-12-09 10:05:11.494599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.494697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.661 [2024-12-09 10:05:11.494784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:53:38.661 [2024-12-09 10:05:11.494881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.661 [2024-12-09 10:05:11.494991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.495080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.495151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.495248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.495321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.495408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.495490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.495570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.495650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.495728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.495798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.495893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.495991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:53:38.661 [2024-12-09 10:05:11.496118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.661 [2024-12-09 10:05:11.496215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.496310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.496401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.496481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.496561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.496639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.496740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.496836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.496909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.497016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.497103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.497194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.497275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.497372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.497453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.497548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.497630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.497717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.497803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.497900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.497997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:53:38.662 [2024-12-09 10:05:11.498623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.662 [2024-12-09 10:05:11.498743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.498780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.498818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.498858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:53:38.662 [2024-12-09 10:05:11.498880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.662 [2024-12-09 10:05:11.498895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.498917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.663 [2024-12-09 10:05:11.498932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.500311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.663 [2024-12-09 10:05:11.500399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.500493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 
nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.663 [2024-12-09 10:05:11.500586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.501531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.663 [2024-12-09 10:05:11.501652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.501762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.501849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.501948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.502066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.502169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.502272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.502369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.502441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.502536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.502609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.502702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.502774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.502868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.502984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.503127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.503225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.503315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.503398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.503493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.503579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.503679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.503760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.503854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.503926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.504042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.504156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.504246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.504329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.504415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.504500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.504619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.504701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.504744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.504762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.504791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.504807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:53:38.663 [2024-12-09 10:05:11.504835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.504851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:11.504879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:11.504895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:53:38.663 7905.48 IOPS, 30.88 MiB/s [2024-12-09T10:05:46.165Z] 7576.08 IOPS, 29.59 MiB/s [2024-12-09T10:05:46.165Z] 7273.04 IOPS, 28.41 MiB/s [2024-12-09T10:05:46.165Z] 6993.31 IOPS, 27.32 MiB/s [2024-12-09T10:05:46.165Z] 6734.30 IOPS, 26.31 MiB/s [2024-12-09T10:05:46.165Z] 6493.79 IOPS, 25.37 MiB/s [2024-12-09T10:05:46.165Z] 6269.86 IOPS, 24.49 MiB/s [2024-12-09T10:05:46.165Z] 6293.07 IOPS, 24.58 MiB/s [2024-12-09T10:05:46.165Z] 6363.35 IOPS, 24.86 MiB/s [2024-12-09T10:05:46.165Z] 6429.00 IOPS, 25.11 MiB/s [2024-12-09T10:05:46.165Z] 6489.70 IOPS, 25.35 MiB/s [2024-12-09T10:05:46.165Z] 6547.76 IOPS, 25.58 MiB/s [2024-12-09T10:05:46.165Z] 6602.51 IOPS, 25.79 MiB/s [2024-12-09T10:05:46.165Z] [2024-12-09 10:05:25.012789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:25.012853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:25.012928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:25.012951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:53:38.663 [2024-12-09 10:05:25.012988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.663 [2024-12-09 10:05:25.013007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 
sqhd:0069 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.664 [2024-12-09 10:05:25.013810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.013974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.013994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.014009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.014024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.014047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.014064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.014079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.664 [2024-12-09 10:05:25.014095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.664 [2024-12-09 10:05:25.014109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 
[2024-12-09 10:05:25.014216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014521] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.665 [2024-12-09 10:05:25.014594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.665 [2024-12-09 10:05:25.014625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.665 [2024-12-09 10:05:25.014655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.665 [2024-12-09 10:05:25.014685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.665 [2024-12-09 10:05:25.014714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.665 [2024-12-09 10:05:25.014743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.665 [2024-12-09 10:05:25.014773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.665 [2024-12-09 10:05:25.014803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014824] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.014970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.014988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.015004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.015021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.665 [2024-12-09 10:05:25.015035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.665 [2024-12-09 10:05:25.015051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015781] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.666 [2024-12-09 10:05:25.015810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.666 [2024-12-09 10:05:25.015949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.666 [2024-12-09 10:05:25.015976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.015993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.667 [2024-12-09 10:05:25.016356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.667 [2024-12-09 10:05:25.016388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.667 [2024-12-09 10:05:25.016426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.667 [2024-12-09 10:05:25.016459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.667 [2024-12-09 10:05:25.016489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.667 [2024-12-09 10:05:25.016519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.667 [2024-12-09 10:05:25.016548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:38.667 [2024-12-09 10:05:25.016578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.667 [2024-12-09 10:05:25.016786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.667 [2024-12-09 10:05:25.016808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.668 [2024-12-09 10:05:25.016823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.668 [2024-12-09 10:05:25.016839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.668 [2024-12-09 10:05:25.016852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.668 [2024-12-09 10:05:25.016868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.668 [2024-12-09 10:05:25.016884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.668 [2024-12-09 10:05:25.016900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.668 [2024-12-09 10:05:25.016915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.668 [2024-12-09 10:05:25.016932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.668 [2024-12-09 10:05:25.016947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.668 [2024-12-09 10:05:25.016974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.668 [2024-12-09 10:05:25.016990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.668 [2024-12-09 10:05:25.017005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.668 [2024-12-09 10:05:25.017020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.668 [2024-12-09 10:05:25.017036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.668 [2024-12-09 10:05:25.017049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.668 
[2024-12-09 10:05:25.017065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520290 is same with the state(6) to be set 00:53:38.668 [2024-12-09 10:05:25.017092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:38.668 [2024-12-09 10:05:25.017103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:38.668 [2024-12-09 10:05:25.017114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59896 len:8 PRP1 0x0 PRP2 0x0 00:53:38.668 [2024-12-09 10:05:25.017127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.668 [2024-12-09 10:05:25.018297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:38.668 [2024-12-09 10:05:25.018382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.668 [2024-12-09 10:05:25.018406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:38.668 [2024-12-09 10:05:25.018445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24911e0 (9): Bad file descriptor 00:53:38.668 [2024-12-09 10:05:25.018853] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:53:38.668 [2024-12-09 10:05:25.018905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24911e0 with addr=10.0.0.3, port=4421 00:53:38.668 [2024-12-09 10:05:25.018924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24911e0 is same with the state(6) to be set 00:53:38.668 [2024-12-09 10:05:25.018990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24911e0 (9): Bad file descriptor 00:53:38.668 [2024-12-09 10:05:25.019027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:38.668 [2024-12-09 10:05:25.019044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:38.668 [2024-12-09 10:05:25.019059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:38.668 [2024-12-09 10:05:25.019073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:38.668 [2024-12-09 10:05:25.019088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:38.668 6652.92 IOPS, 25.99 MiB/s [2024-12-09T10:05:46.170Z] 6704.89 IOPS, 26.19 MiB/s [2024-12-09T10:05:46.170Z] 6749.92 IOPS, 26.37 MiB/s [2024-12-09T10:05:46.170Z] 6795.92 IOPS, 26.55 MiB/s [2024-12-09T10:05:46.170Z] 6839.23 IOPS, 26.72 MiB/s [2024-12-09T10:05:46.170Z] 6880.22 IOPS, 26.88 MiB/s [2024-12-09T10:05:46.170Z] 6919.74 IOPS, 27.03 MiB/s [2024-12-09T10:05:46.170Z] 6957.42 IOPS, 27.18 MiB/s [2024-12-09T10:05:46.170Z] 6993.11 IOPS, 27.32 MiB/s [2024-12-09T10:05:46.170Z] 7027.58 IOPS, 27.45 MiB/s [2024-12-09T10:05:46.170Z] [2024-12-09 10:05:35.083028] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
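The burst of NOTICE completions above is the expected multipath behaviour: once the active listener drops, every queued WRITE/READ on qid:1 is failed back with ASYMMETRIC ACCESS INACCESSIBLE (03/02) or ABORTED - SQ DELETION (00/08), bdev_nvme disconnects the controller, and after one refused connect() attempt (errno 111) the reconnect lands on the surviving path at 10.0.0.3:4421. A minimal sketch of driving that failover by hand with SPDK's rpc.py follows; the NQN, addresses and ports are taken from the log, while the bdevperf RPC socket path and the -x multipath flag are assumptions that may differ between SPDK releases.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev_sock=/var/tmp/bdevperf.sock   # assumed initiator-side (bdevperf) RPC socket

  # Initiator side: register both target listeners as paths of one controller.
  $rpc -s $bdev_sock bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x multipath
  $rpc -s $bdev_sock bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x multipath

  # Target side: drop the active listener; in-flight I/O is completed with the
  # status codes seen above and bdev_nvme resets onto the remaining path.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Confirm the reconnect finished ("Resetting controller successful").
  $rpc -s $bdev_sock bdev_nvme_get_controllers -n Nvme0

In the run above, bdevperf keeps the verify workload running through the reset window, which is why the IOPS counters dip to roughly 6.2k MiB-range during the path loss and climb back past 7.3k once the controller is reconnected.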
00:53:38.668 7060.39 IOPS, 27.58 MiB/s [2024-12-09T10:05:46.170Z] 7092.81 IOPS, 27.71 MiB/s [2024-12-09T10:05:46.170Z] 7124.79 IOPS, 27.83 MiB/s [2024-12-09T10:05:46.170Z] 7155.47 IOPS, 27.95 MiB/s [2024-12-09T10:05:46.170Z] 7184.20 IOPS, 28.06 MiB/s [2024-12-09T10:05:46.170Z] 7209.45 IOPS, 28.16 MiB/s [2024-12-09T10:05:46.170Z] 7235.42 IOPS, 28.26 MiB/s [2024-12-09T10:05:46.170Z] 7260.72 IOPS, 28.36 MiB/s [2024-12-09T10:05:46.170Z] 7284.93 IOPS, 28.46 MiB/s [2024-12-09T10:05:46.170Z] 7308.25 IOPS, 28.55 MiB/s [2024-12-09T10:05:46.170Z] Received shutdown signal, test time was about 55.990741 seconds 00:53:38.668 00:53:38.668 Latency(us) 00:53:38.668 [2024-12-09T10:05:46.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:38.668 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:53:38.668 Verification LBA range: start 0x0 length 0x4000 00:53:38.668 Nvme0n1 : 55.99 7328.22 28.63 0.00 0.00 17435.98 197.35 7046430.72 00:53:38.668 [2024-12-09T10:05:46.170Z] =================================================================================================================== 00:53:38.668 [2024-12-09T10:05:46.170Z] Total : 7328.22 28.63 0.00 0.00 17435.98 197.35 7046430.72 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:38.668 rmmod nvme_tcp 00:53:38.668 rmmod nvme_fabrics 00:53:38.668 rmmod nvme_keyring 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:38.668 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80761 ']' 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80761 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80761 ']' 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80761 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80761 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80761' 00:53:38.669 killing process with pid 80761 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80761 00:53:38.669 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80761 00:53:38.669 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:38.669 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:38.669 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:38.669 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:53:38.669 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:53:38.669 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:53:38.669 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:38.669 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:38.669 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:38.669 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:38.928 10:05:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:53:38.928 00:53:38.928 real 1m1.277s 00:53:38.928 user 2m49.986s 00:53:38.928 sys 0m18.374s 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:38.928 ************************************ 00:53:38.928 END TEST nvmf_host_multipath 00:53:38.928 ************************************ 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:53:38.928 10:05:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:53:38.929 10:05:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:38.929 10:05:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:38.929 10:05:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:53:38.929 ************************************ 00:53:38.929 START TEST nvmf_timeout 00:53:38.929 ************************************ 00:53:38.929 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:53:39.188 * Looking for test storage... 00:53:39.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:53:39.188 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:39.189 --rc genhtml_branch_coverage=1 00:53:39.189 --rc genhtml_function_coverage=1 00:53:39.189 --rc genhtml_legend=1 00:53:39.189 --rc geninfo_all_blocks=1 00:53:39.189 --rc geninfo_unexecuted_blocks=1 00:53:39.189 00:53:39.189 ' 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:39.189 --rc genhtml_branch_coverage=1 00:53:39.189 --rc genhtml_function_coverage=1 00:53:39.189 --rc genhtml_legend=1 00:53:39.189 --rc geninfo_all_blocks=1 00:53:39.189 --rc geninfo_unexecuted_blocks=1 00:53:39.189 00:53:39.189 ' 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:39.189 --rc genhtml_branch_coverage=1 00:53:39.189 --rc genhtml_function_coverage=1 00:53:39.189 --rc genhtml_legend=1 00:53:39.189 --rc geninfo_all_blocks=1 00:53:39.189 --rc geninfo_unexecuted_blocks=1 00:53:39.189 00:53:39.189 ' 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:39.189 --rc genhtml_branch_coverage=1 00:53:39.189 --rc genhtml_function_coverage=1 00:53:39.189 --rc genhtml_legend=1 00:53:39.189 --rc geninfo_all_blocks=1 00:53:39.189 --rc geninfo_unexecuted_blocks=1 00:53:39.189 00:53:39.189 ' 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:39.189 
10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:39.189 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:39.190 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:39.190 10:05:46 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:39.190 Cannot find device "nvmf_init_br" 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:39.190 Cannot find device "nvmf_init_br2" 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:53:39.190 Cannot find device "nvmf_tgt_br" 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:39.190 Cannot find device "nvmf_tgt_br2" 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:39.190 Cannot find device "nvmf_init_br" 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:53:39.190 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:39.449 Cannot find device "nvmf_init_br2" 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:39.449 Cannot find device "nvmf_tgt_br" 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:39.449 Cannot find device "nvmf_tgt_br2" 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:39.449 Cannot find device "nvmf_br" 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:39.449 Cannot find device "nvmf_init_if" 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:39.449 Cannot find device "nvmf_init_if2" 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:39.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:39.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:39.449 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
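The nvmf_veth_init trace above builds the test topology: four veth pairs plus a network namespace that will host the target. A condensed sketch of those commands (interface and namespace names exactly as traced; only the grouping and the comments are added):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per initiator/target interface; the *_br ends get enslaved to a bridge later
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # move the target-facing ends into the namespace where nvmf_tgt will run
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk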
00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:39.450 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:39.709 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:39.709 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:39.709 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:39.709 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:39.709 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:39.709 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:39.709 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:39.709 10:05:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
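Addressing, bridging and firewalling follow; condensed from the traced commands (the per-link "up" steps and the SPDK_NVMF comment tags on the iptables rules are omitted here for brevity):

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side, host namespace
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side, in the namespace
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br  master nvmf_br    # the bridge joins the host-side peer ends of all four pairs
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP on the default port
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings that follow in the trace verify reachability in both directions across the bridge before the target is started.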
00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:39.709 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:39.709 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:53:39.709 00:53:39.709 --- 10.0.0.3 ping statistics --- 00:53:39.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:39.709 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:39.709 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:39.709 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:53:39.709 00:53:39.709 --- 10.0.0.4 ping statistics --- 00:53:39.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:39.709 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:39.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:39.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:53:39.709 00:53:39.709 --- 10.0.0.1 ping statistics --- 00:53:39.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:39.709 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:39.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:39.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:53:39.709 00:53:39.709 --- 10.0.0.2 ping statistics --- 00:53:39.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:39.709 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81970 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81970 00:53:39.709 10:05:47 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81970 ']' 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:39.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:39.709 10:05:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:53:39.709 [2024-12-09 10:05:47.120377] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:53:39.709 [2024-12-09 10:05:47.120470] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:39.968 [2024-12-09 10:05:47.275112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:53:39.968 [2024-12-09 10:05:47.312841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:39.968 [2024-12-09 10:05:47.312893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:39.968 [2024-12-09 10:05:47.312907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:39.968 [2024-12-09 10:05:47.312917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:39.968 [2024-12-09 10:05:47.312926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
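nvmfappstart launches the target inside the namespace. A sketch of the invocation traced above as a standalone command; the backgrounding and the pid variable reflect how the autotest helper appears to wrap it and are an assumption, not a verbatim copy:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # backgrounding assumed
    nvmfpid=$!
    # -m 0x3: reactors on cores 0 and 1 (matching the two "Reactor started" notices below)
    # -e 0xFFFF: enable all tracepoint groups; -i 0: shared-memory id referenced by process_shm in the trap
    waitforlisten "$nvmfpid"    # autotest helper; blocks until the app answers on /var/tmp/spdk.sock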
00:53:39.968 [2024-12-09 10:05:47.313791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:39.968 [2024-12-09 10:05:47.313810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:39.968 [2024-12-09 10:05:47.348036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:53:40.903 10:05:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:40.903 10:05:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:53:40.903 10:05:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:40.903 10:05:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:40.903 10:05:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:53:40.903 10:05:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:40.903 10:05:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:53:40.903 10:05:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:53:41.162 [2024-12-09 10:05:48.459756] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:41.162 10:05:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:53:41.420 Malloc0 00:53:41.420 10:05:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:53:41.727 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:53:41.985 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:53:42.243 [2024-12-09 10:05:49.653318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:42.243 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82023 00:53:42.243 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82023 /var/tmp/bdevperf.sock 00:53:42.243 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:53:42.243 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82023 ']' 00:53:42.243 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:53:42.243 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:42.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:53:42.243 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
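With the target up, timeout.sh provisions it over RPC. The sequence traced above, collected in order (rpc.py abbreviates the full scripts/rpc.py path used in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192                  # "*** TCP Transport Init ***"
    rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

-a on nvmf_create_subsystem allows any host NQN to connect and -s sets the serial number; the listener address and port (10.0.0.3:4420) are the namespaced target interface and NVMF_PORT configured earlier. "-t tcp -o" presumably comes from the NVMF_TRANSPORT_OPTS value set in common.sh, with -u 8192 added by the test itself.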
00:53:42.243 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:42.243 10:05:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:53:42.243 [2024-12-09 10:05:49.731016] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:53:42.243 [2024-12-09 10:05:49.731113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82023 ] 00:53:42.502 [2024-12-09 10:05:49.889483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:42.502 [2024-12-09 10:05:49.929030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:42.502 [2024-12-09 10:05:49.964883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:53:42.759 10:05:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:42.760 10:05:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:53:42.760 10:05:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:53:43.018 10:05:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:53:43.276 NVMe0n1 00:53:43.276 10:05:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82035 00:53:43.276 10:05:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:53:43.276 10:05:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:53:43.276 Running I/O for 10 seconds... 
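On the initiator side a bdevperf instance plays the host role. The traced invocation and RPCs, condensed (paths shortened, backgrounding and comments added):

    # -z: start idle and wait for the perform_tests RPC; -m 0x4: single reactor on core 2
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1        # option copied verbatim from the trace
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

-q/-o/-w/-t give a queue depth of 128, 4 KiB I/Os, a verify workload and a 10-second run. The two attach options are what the nvmf_timeout test exercises: if the connection drops, the bdev layer retries roughly every 2 seconds and gives the controller up as lost after 5 seconds of failed reconnects. Removing the listener right after the run starts (the next traced command) is what produces the stream of ABORTED - SQ DELETION completions that follows.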
00:53:44.211 10:05:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:53:44.472 6677.00 IOPS, 26.08 MiB/s [2024-12-09T10:05:51.974Z] [2024-12-09 10:05:51.951511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63968 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.472 [2024-12-09 10:05:51.951867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.472 [2024-12-09 10:05:51.951878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.951888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.951899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.951909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.951921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.951931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.951942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.951952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.951985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.951997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 
[2024-12-09 10:05:51.952018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.473 [2024-12-09 10:05:51.952767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.473 [2024-12-09 10:05:51.952776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.952787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.952796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.952807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.952816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.952827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.952836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.952847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.952857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.952868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.952877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.952888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.952897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.952908] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.952917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.952928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.952937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.952948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.952971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.952985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.952995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953129] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64536 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 
[2024-12-09 10:05:51.953544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.474 [2024-12-09 10:05:51.953603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.474 [2024-12-09 10:05:51.953614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.475 [2024-12-09 10:05:51.953623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.953935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.475 [2024-12-09 10:05:51.953965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.953979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.475 [2024-12-09 10:05:51.953989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:44.475 [2024-12-09 10:05:51.954157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:44.475 [2024-12-09 10:05:51.954302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880690 is same with the state(6) to be set 00:53:44.475 [2024-12-09 10:05:51.954325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:44.475 [2024-12-09 10:05:51.954332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:44.475 [2024-12-09 10:05:51.954346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63888 len:8 PRP1 0x0 PRP2 0x0 00:53:44.475 [2024-12-09 10:05:51.954355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:44.475 [2024-12-09 10:05:51.954657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:53:44.475 [2024-12-09 10:05:51.954745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820e50 (9): Bad file descriptor 00:53:44.475 [2024-12-09 10:05:51.954849] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:53:44.475 [2024-12-09 10:05:51.954871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x820e50 with addr=10.0.0.3, port=4420 00:53:44.475 [2024-12-09 10:05:51.954882] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820e50 is same with the state(6) to be set 00:53:44.475 [2024-12-09 10:05:51.954901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820e50 (9): Bad file descriptor 00:53:44.475 [2024-12-09 10:05:51.954917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:53:44.475 [2024-12-09 10:05:51.954927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:53:44.475 [2024-12-09 10:05:51.954938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:53:44.475 [2024-12-09 10:05:51.954950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:53:44.475 [2024-12-09 10:05:51.954975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:53:44.476 10:05:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:53:46.346 3978.50 IOPS, 15.54 MiB/s [2024-12-09T10:05:54.107Z] 2652.33 IOPS, 10.36 MiB/s [2024-12-09T10:05:54.107Z] [2024-12-09 10:05:53.955157] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:53:46.605 [2024-12-09 10:05:53.955362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x820e50 with addr=10.0.0.3, port=4420 00:53:46.605 [2024-12-09 10:05:53.955516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820e50 is same with the state(6) to be set 00:53:46.605 [2024-12-09 10:05:53.955553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820e50 (9): Bad file descriptor 00:53:46.605 [2024-12-09 10:05:53.955593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:53:46.605 [2024-12-09 10:05:53.955606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:53:46.605 [2024-12-09 10:05:53.955617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:53:46.605 [2024-12-09 10:05:53.955629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:53:46.605 [2024-12-09 10:05:53.955640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:53:46.605 10:05:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:53:46.605 10:05:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:53:46.605 10:05:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:53:46.863 10:05:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:53:46.863 10:05:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:53:46.863 10:05:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:53:46.863 10:05:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:53:47.122 10:05:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:53:47.122 10:05:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:53:48.315 1989.25 IOPS, 7.77 MiB/s [2024-12-09T10:05:56.075Z] 1591.40 IOPS, 6.22 MiB/s [2024-12-09T10:05:56.075Z] [2024-12-09 10:05:55.955828] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:53:48.573 [2024-12-09 10:05:55.955902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x820e50 with addr=10.0.0.3, port=4420 00:53:48.573 [2024-12-09 10:05:55.955929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820e50 is same with the state(6) to be set 00:53:48.573 [2024-12-09 10:05:55.956015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820e50 (9): Bad file descriptor 00:53:48.573 [2024-12-09 10:05:55.956087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:53:48.573 [2024-12-09 10:05:55.956106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:53:48.573 [2024-12-09 10:05:55.956132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:53:48.573 [2024-12-09 10:05:55.956157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:53:48.573 [2024-12-09 10:05:55.956175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:53:50.441 1326.17 IOPS, 5.18 MiB/s [2024-12-09T10:05:58.201Z] 1136.71 IOPS, 4.44 MiB/s [2024-12-09T10:05:58.201Z] [2024-12-09 10:05:57.956229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:53:50.699 [2024-12-09 10:05:57.956291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:53:50.699 [2024-12-09 10:05:57.956304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:53:50.699 [2024-12-09 10:05:57.956314] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:53:50.699 [2024-12-09 10:05:57.956326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:53:51.524 994.62 IOPS, 3.89 MiB/s 00:53:51.524 Latency(us) 00:53:51.524 [2024-12-09T10:05:59.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:51.524 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:53:51.525 Verification LBA range: start 0x0 length 0x4000 00:53:51.525 NVMe0n1 : 8.19 971.31 3.79 15.62 0.00 129506.15 4379.00 7015926.69 00:53:51.525 [2024-12-09T10:05:59.027Z] =================================================================================================================== 00:53:51.525 [2024-12-09T10:05:59.027Z] Total : 971.31 3.79 15.62 0.00 129506.15 4379.00 7015926.69 00:53:51.525 { 00:53:51.525 "results": [ 00:53:51.525 { 00:53:51.525 "job": "NVMe0n1", 00:53:51.525 "core_mask": "0x4", 00:53:51.525 "workload": "verify", 00:53:51.525 "status": "finished", 00:53:51.525 "verify_range": { 00:53:51.525 "start": 0, 00:53:51.525 "length": 16384 00:53:51.525 }, 00:53:51.525 "queue_depth": 128, 00:53:51.525 "io_size": 4096, 00:53:51.525 "runtime": 8.192012, 00:53:51.525 "iops": 971.3120537421088, 00:53:51.525 "mibps": 3.7941877099301125, 00:53:51.525 "io_failed": 128, 00:53:51.525 "io_timeout": 0, 00:53:51.525 "avg_latency_us": 129506.15310867489, 00:53:51.525 "min_latency_us": 4378.996363636364, 00:53:51.525 "max_latency_us": 7015926.69090909 00:53:51.525 } 00:53:51.525 ], 00:53:51.525 "core_count": 1 00:53:51.525 } 00:53:52.094 10:05:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:53:52.094 10:05:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:53:52.094 10:05:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:53:52.667 10:05:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:53:52.667 10:05:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:53:52.667 10:05:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:53:52.667 10:05:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:53:52.667 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:53:52.667 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82035 00:53:52.667 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82023 00:53:52.667 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82023 ']' 00:53:52.667 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82023 00:53:52.926 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:53:52.926 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:52.926 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82023 00:53:52.926 killing process with pid 82023 00:53:52.926 Received shutdown signal, test time was about 9.434727 seconds 00:53:52.926 00:53:52.926 Latency(us) 00:53:52.926 [2024-12-09T10:06:00.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:52.926 [2024-12-09T10:06:00.428Z] =================================================================================================================== 00:53:52.926 [2024-12-09T10:06:00.428Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:53:52.926 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:53:52.926 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:53:52.926 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82023' 00:53:52.926 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82023 00:53:52.926 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82023 00:53:52.926 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:53:53.184 [2024-12-09 10:06:00.661418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:53.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:53:53.184 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82165 00:53:53.184 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:53:53.185 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82165 /var/tmp/bdevperf.sock 00:53:53.185 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82165 ']' 00:53:53.185 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:53:53.185 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:53.185 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:53:53.185 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:53.185 10:06:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:53:53.443 [2024-12-09 10:06:00.741602] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:53:53.443 [2024-12-09 10:06:00.741702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82165 ] 00:53:53.443 [2024-12-09 10:06:00.897029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:53.443 [2024-12-09 10:06:00.936408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:53.716 [2024-12-09 10:06:00.971326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:53:54.283 10:06:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:54.283 10:06:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:53:54.283 10:06:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:53:54.541 10:06:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:53:55.108 NVMe0n1 00:53:55.108 10:06:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82190 00:53:55.108 10:06:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:53:55.108 10:06:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:53:55.108 Running I/O for 10 seconds... 
00:53:56.043 10:06:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:53:56.304 6689.00 IOPS, 26.13 MiB/s [2024-12-09T10:06:03.806Z] [2024-12-09 10:06:03.645556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64152 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.645979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.645992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.646004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.646014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.646029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.646039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.646051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.646061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.646072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:53:56.304 [2024-12-09 10:06:03.646082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.646094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.646104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.646115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.646125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.304 [2024-12-09 10:06:03.646137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.304 [2024-12-09 10:06:03.646146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646293] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646504] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 
[2024-12-09 10:06:03.646949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.646983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.646993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.647005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.305 [2024-12-09 10:06:03.647015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.305 [2024-12-09 10:06:03.647027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:109 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.306 [2024-12-09 10:06:03.647540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63832 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:53:56.306 [2024-12-09 10:06:03.647840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.306 [2024-12-09 10:06:03.647893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.306 [2024-12-09 10:06:03.647902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.647914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.647923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.647936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.647945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.647967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.307 [2024-12-09 10:06:03.647979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.647991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.307 [2024-12-09 10:06:03.648000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.307 [2024-12-09 10:06:03.648026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.307 [2024-12-09 10:06:03.648049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.307 [2024-12-09 10:06:03.648083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.307 [2024-12-09 10:06:03.648104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:56.307 [2024-12-09 10:06:03.648275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648296] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:56.307 [2024-12-09 10:06:03.648432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174f690 is same with the state(6) to be set 00:53:56.307 [2024-12-09 10:06:03.648455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:56.307 [2024-12-09 10:06:03.648464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:56.307 [2024-12-09 10:06:03.648473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64072 len:8 PRP1 0x0 PRP2 0x0 00:53:56.307 [2024-12-09 10:06:03.648482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:53:56.307 [2024-12-09 10:06:03.648628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:53:56.307 [2024-12-09 10:06:03.648649] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:53:56.307 [2024-12-09 10:06:03.648668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:53:56.307 [2024-12-09 10:06:03.648688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:56.307 [2024-12-09 10:06:03.648697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16efe50 is same with the state(6) to be set 00:53:56.307 [2024-12-09 10:06:03.648938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:56.307 [2024-12-09 10:06:03.648986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16efe50 (9): Bad file descriptor 00:53:56.307 [2024-12-09 10:06:03.649085] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:53:56.307 [2024-12-09 10:06:03.649108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16efe50 with addr=10.0.0.3, port=4420 00:53:56.307 [2024-12-09 10:06:03.649119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16efe50 is same with the state(6) to be set 00:53:56.307 [2024-12-09 10:06:03.649138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16efe50 (9): Bad file descriptor 00:53:56.307 [2024-12-09 10:06:03.649155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:56.307 [2024-12-09 10:06:03.649165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:56.307 [2024-12-09 10:06:03.649175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:56.307 [2024-12-09 10:06:03.649186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:56.307 [2024-12-09 10:06:03.649197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:53:56.307 10:06:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:53:57.242 3984.50 IOPS, 15.56 MiB/s [2024-12-09T10:06:04.744Z] [2024-12-09 10:06:04.649360] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:53:57.242 [2024-12-09 10:06:04.649603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16efe50 with addr=10.0.0.3, port=4420
00:53:57.242 [2024-12-09 10:06:04.649761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16efe50 is same with the state(6) to be set
00:53:57.242 [2024-12-09 10:06:04.649841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16efe50 (9): Bad file descriptor
00:53:57.242 [2024-12-09 10:06:04.650095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:53:57.242 [2024-12-09 10:06:04.650155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:53:57.242 [2024-12-09 10:06:04.650210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:53:57.242 [2024-12-09 10:06:04.650382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:53:57.242 [2024-12-09 10:06:04.650450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:53:57.242 10:06:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:53:57.500 [2024-12-09 10:06:04.967413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:53:57.500 10:06:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82190
00:53:58.327 2656.33 IOPS, 10.38 MiB/s [2024-12-09T10:06:05.829Z] [2024-12-09 10:06:05.666837] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
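The listener re-add at host/timeout.sh@91 above is the step that lets the repeated controller resets finally succeed (the preceding connect() failures with errno = 111 stop once the target is listening again on 10.0.0.3:4420). A minimal Python sketch of that same step, assuming the SPDK checkout path, subsystem NQN, and address/port shown in this log (all environment-specific), could look like:

import subprocess

# Values copied from the log above; adjust for your environment.
RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"

def add_tcp_listener(addr: str = "10.0.0.3", port: str = "4420") -> None:
    # Re-register the NVMe/TCP listener so the host side can reconnect,
    # mirroring the rpc.py invocation at host/timeout.sh@91.
    subprocess.run(
        [RPC, "nvmf_subsystem_add_listener", NQN,
         "-t", "tcp", "-a", addr, "-s", port],
        check=True,
    )

if __name__ == "__main__":
    add_tcp_listener()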
00:54:00.198 1992.25 IOPS, 7.78 MiB/s [2024-12-09T10:06:08.635Z] 2697.40 IOPS, 10.54 MiB/s [2024-12-09T10:06:09.624Z] 3378.33 IOPS, 13.20 MiB/s [2024-12-09T10:06:10.559Z] 3846.71 IOPS, 15.03 MiB/s [2024-12-09T10:06:11.494Z] 4215.75 IOPS, 16.47 MiB/s [2024-12-09T10:06:12.870Z] 4513.56 IOPS, 17.63 MiB/s [2024-12-09T10:06:12.870Z] 4742.20 IOPS, 18.52 MiB/s
00:54:05.368 Latency(us)
00:54:05.368 [2024-12-09T10:06:12.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:54:05.368 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:54:05.368 Verification LBA range: start 0x0 length 0x4000
00:54:05.368 NVMe0n1 : 10.01 4748.97 18.55 0.00 0.00 26905.76 3961.95 3019898.88
00:54:05.368 [2024-12-09T10:06:12.870Z] ===================================================================================================================
00:54:05.368 [2024-12-09T10:06:12.870Z] Total : 4748.97 18.55 0.00 0.00 26905.76 3961.95 3019898.88
00:54:05.368 {
00:54:05.368 "results": [
00:54:05.368 {
00:54:05.368 "job": "NVMe0n1",
00:54:05.368 "core_mask": "0x4",
00:54:05.368 "workload": "verify",
00:54:05.368 "status": "finished",
00:54:05.368 "verify_range": {
00:54:05.368 "start": 0,
00:54:05.368 "length": 16384
00:54:05.368 },
00:54:05.368 "queue_depth": 128,
00:54:05.368 "io_size": 4096,
00:54:05.368 "runtime": 10.009542,
00:54:05.368 "iops": 4748.968534224643,
00:54:05.368 "mibps": 18.55065833681501,
00:54:05.368 "io_failed": 0,
00:54:05.368 "io_timeout": 0,
00:54:05.368 "avg_latency_us": 26905.75601319602,
00:54:05.368 "min_latency_us": 3961.949090909091,
00:54:05.368 "max_latency_us": 3019898.88
00:54:05.368 }
00:54:05.368 ],
00:54:05.368 "core_count": 1
00:54:05.368 }
00:54:05.368 10:06:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82290
00:54:05.368 10:06:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:54:05.368 10:06:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:54:05.368 Running I/O for 10 seconds...
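The derived figures in the bdevperf summary above follow directly from the raw JSON fields: MiB/s is iops * io_size / 2^20, and the number of completed I/Os is roughly iops * runtime. A small arithmetic check, using only values copied verbatim from the "results" block above:

# Values copied verbatim from the bdevperf "results" JSON above.
iops = 4748.968534224643
io_size = 4096          # bytes per I/O
runtime = 10.009542     # seconds

mibps = iops * io_size / (1024 * 1024)
total_ios = iops * runtime

print(f"throughput: {mibps:.2f} MiB/s")    # ~18.55, matches the "mibps" field
print(f"completed I/Os: {total_ios:.0f}")  # ~47535 over the ~10 s run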
00:54:06.305 10:06:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:54:06.305 6804.00 IOPS, 26.58 MiB/s [2024-12-09T10:06:13.807Z] [2024-12-09 10:06:13.748384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.305 [2024-12-09 10:06:13.748624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.305 [2024-12-09 10:06:13.748781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.305 [2024-12-09 10:06:13.748928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.305 [2024-12-09 10:06:13.749228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.305 [2024-12-09 10:06:13.749385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.305 [2024-12-09 10:06:13.749585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.305 [2024-12-09 10:06:13.749722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.305 [2024-12-09 10:06:13.749936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.749952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.749979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.749990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62176 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 
[2024-12-09 10:06:13.750295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.306 [2024-12-09 10:06:13.750811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.306 [2024-12-09 10:06:13.750820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.750832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.750842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.750854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.750864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.750875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.750885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.750896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.750906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.750918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.750928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.750940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.750949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:54:06.307 [2024-12-09 10:06:13.750973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.750985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.750997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751186] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.307 [2024-12-09 10:06:13.751598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:122 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.307 [2024-12-09 10:06:13.751682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.307 [2024-12-09 10:06:13.751694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.308 [2024-12-09 10:06:13.751704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.308 [2024-12-09 10:06:13.751715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.308 [2024-12-09 10:06:13.751725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.308 [2024-12-09 10:06:13.751736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.308 [2024-12-09 10:06:13.751746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.308 [2024-12-09 10:06:13.751757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:54:06.308 [2024-12-09 10:06:13.751767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.308 [2024-12-09 10:06:13.751778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.308 [2024-12-09 10:06:13.751788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.308 [2024-12-09 10:06:13.751799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.308 [2024-12-09 10:06:13.751809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.308 [2024-12-09 10:06:13.751825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61800 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:54:06.308 [2024-12-09 10:06:13.751836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for READ sqid:1 nsid:1 len:8 at lba 61808 through 62032, various cids, every completion ABORTED - SQ DELETION (00/08), timestamps 10:06:13.751848 through 10:06:13.752488 ...)
00:54:06.308 [2024-12-09 10:06:13.752500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:54:06.308 [2024-12-09 10:06:13.752509] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.308 [2024-12-09 10:06:13.752521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.308 [2024-12-09 10:06:13.752530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.308 [2024-12-09 10:06:13.752543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.308 [2024-12-09 10:06:13.752553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.308 [2024-12-09 10:06:13.752565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.308 [2024-12-09 10:06:13.752574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.308 [2024-12-09 10:06:13.752586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.308 [2024-12-09 10:06:13.752596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.309 [2024-12-09 10:06:13.752607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.309 [2024-12-09 10:06:13.752617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.309 [2024-12-09 10:06:13.752628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.309 [2024-12-09 10:06:13.752638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.309 [2024-12-09 10:06:13.752649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174e1b0 is same with the state(6) to be set 00:54:06.309 [2024-12-09 10:06:13.752663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:54:06.309 [2024-12-09 10:06:13.752671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:54:06.309 [2024-12-09 10:06:13.752680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62096 len:8 PRP1 0x0 PRP2 0x0 00:54:06.309 [2024-12-09 10:06:13.752689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.309 [2024-12-09 10:06:13.752990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:54:06.309 [2024-12-09 10:06:13.753082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16efe50 (9): Bad file descriptor 00:54:06.309 [2024-12-09 10:06:13.753187] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:54:06.309 [2024-12-09 10:06:13.753209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16efe50 with addr=10.0.0.3, 
port=4420 00:54:06.309 [2024-12-09 10:06:13.753221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16efe50 is same with the state(6) to be set 00:54:06.309 [2024-12-09 10:06:13.753240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16efe50 (9): Bad file descriptor 00:54:06.309 [2024-12-09 10:06:13.753257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:54:06.309 [2024-12-09 10:06:13.753267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:54:06.309 [2024-12-09 10:06:13.753278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:54:06.309 [2024-12-09 10:06:13.753289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:54:06.309 [2024-12-09 10:06:13.753300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:54:06.309 10:06:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:54:07.500 3850.50 IOPS, 15.04 MiB/s [2024-12-09T10:06:15.002Z] [2024-12-09 10:06:14.753442] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:54:07.500 [2024-12-09 10:06:14.753524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16efe50 with addr=10.0.0.3, port=4420 00:54:07.500 [2024-12-09 10:06:14.753542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16efe50 is same with the state(6) to be set 00:54:07.500 [2024-12-09 10:06:14.753567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16efe50 (9): Bad file descriptor 00:54:07.500 [2024-12-09 10:06:14.753587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:54:07.500 [2024-12-09 10:06:14.753598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:54:07.500 [2024-12-09 10:06:14.753609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:54:07.500 [2024-12-09 10:06:14.753621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:54:07.500 [2024-12-09 10:06:14.753634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:54:08.444 2567.00 IOPS, 10.03 MiB/s [2024-12-09T10:06:15.946Z] [2024-12-09 10:06:15.753750] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:54:08.444 [2024-12-09 10:06:15.753817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16efe50 with addr=10.0.0.3, port=4420 00:54:08.444 [2024-12-09 10:06:15.753834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16efe50 is same with the state(6) to be set 00:54:08.444 [2024-12-09 10:06:15.753857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16efe50 (9): Bad file descriptor 00:54:08.444 [2024-12-09 10:06:15.753877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:54:08.444 [2024-12-09 10:06:15.753887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:54:08.444 [2024-12-09 10:06:15.753898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:54:08.444 [2024-12-09 10:06:15.753909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:54:08.444 [2024-12-09 10:06:15.753921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:54:09.378 1925.25 IOPS, 7.52 MiB/s [2024-12-09T10:06:16.880Z] [2024-12-09 10:06:16.757441] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.378 [2024-12-09 10:06:16.757509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16efe50 with addr=10.0.0.3, port=4420 00:54:09.378 [2024-12-09 10:06:16.757525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16efe50 is same with the state(6) to be set 00:54:09.378 [2024-12-09 10:06:16.757767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16efe50 (9): Bad file descriptor 00:54:09.378 [2024-12-09 10:06:16.758042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:54:09.378 [2024-12-09 10:06:16.758070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:54:09.378 [2024-12-09 10:06:16.758082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:54:09.378 [2024-12-09 10:06:16.758110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
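The reconnect attempts above all fail with errno 111 (connection refused) roughly once per second; the trace that follows recovers by restoring the TCP listener. Pulled together from the timeout.sh@101-103 steps in this trace, the recovery amounts to the small sketch below; the bdevperf_pid variable is only a stand-in for the PID the script actually waits on (82290 in this run), everything else is taken from the logged commands.

  # give the host a few failed reconnect attempts, then restore the listener
  sleep 3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # wait for the bdevperf run to finish once the controller has reconnected
  wait "$bdevperf_pid"    # 82290 in this run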
00:54:09.378 [2024-12-09 10:06:16.758122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:54:09.378 10:06:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:54:09.637 [2024-12-09 10:06:17.046210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:09.637 10:06:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82290 00:54:10.462 1540.20 IOPS, 6.02 MiB/s [2024-12-09T10:06:17.964Z] [2024-12-09 10:06:17.783285] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:54:12.337 2583.33 IOPS, 10.09 MiB/s [2024-12-09T10:06:20.789Z] 3552.57 IOPS, 13.88 MiB/s [2024-12-09T10:06:21.724Z] 4264.50 IOPS, 16.66 MiB/s [2024-12-09T10:06:22.658Z] 4821.78 IOPS, 18.84 MiB/s [2024-12-09T10:06:22.658Z] 5266.00 IOPS, 20.57 MiB/s 00:54:15.156 Latency(us) 00:54:15.156 [2024-12-09T10:06:22.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:15.156 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:54:15.156 Verification LBA range: start 0x0 length 0x4000 00:54:15.156 NVMe0n1 : 10.01 5269.39 20.58 3575.33 0.00 14439.75 707.49 3019898.88 00:54:15.156 [2024-12-09T10:06:22.658Z] =================================================================================================================== 00:54:15.156 [2024-12-09T10:06:22.658Z] Total : 5269.39 20.58 3575.33 0.00 14439.75 0.00 3019898.88 00:54:15.156 { 00:54:15.156 "results": [ 00:54:15.156 { 00:54:15.156 "job": "NVMe0n1", 00:54:15.156 "core_mask": "0x4", 00:54:15.156 "workload": "verify", 00:54:15.156 "status": "finished", 00:54:15.156 "verify_range": { 00:54:15.156 "start": 0, 00:54:15.156 "length": 16384 00:54:15.156 }, 00:54:15.156 "queue_depth": 128, 00:54:15.156 "io_size": 4096, 00:54:15.156 "runtime": 10.010261, 00:54:15.156 "iops": 5269.393075764958, 00:54:15.156 "mibps": 20.583566702206866, 00:54:15.157 "io_failed": 35790, 00:54:15.157 "io_timeout": 0, 00:54:15.157 "avg_latency_us": 14439.753223741629, 00:54:15.157 "min_latency_us": 707.4909090909091, 00:54:15.157 "max_latency_us": 3019898.88 00:54:15.157 } 00:54:15.157 ], 00:54:15.157 "core_count": 1 00:54:15.157 } 00:54:15.157 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82165 00:54:15.157 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82165 ']' 00:54:15.157 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82165 00:54:15.157 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:54:15.157 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:15.157 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82165 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82165' 00:54:15.415 killing process with pid 82165 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@973 -- # kill 82165 00:54:15.415 Received shutdown signal, test time was about 10.000000 seconds 00:54:15.415 00:54:15.415 Latency(us) 00:54:15.415 [2024-12-09T10:06:22.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:15.415 [2024-12-09T10:06:22.917Z] =================================================================================================================== 00:54:15.415 [2024-12-09T10:06:22.917Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82165 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82410 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82410 /var/tmp/bdevperf.sock 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82410 ']' 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:15.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:15.415 10:06:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:54:15.415 [2024-12-09 10:06:22.910181] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:54:15.415 [2024-12-09 10:06:22.910404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82410 ] 00:54:15.674 [2024-12-09 10:06:23.058260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:15.674 [2024-12-09 10:06:23.091683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:15.674 [2024-12-09 10:06:23.122275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:54:15.932 10:06:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:15.932 10:06:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:54:15.932 10:06:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82413 00:54:15.932 10:06:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82410 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:54:15.932 10:06:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:54:16.191 10:06:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:54:16.449 NVMe0n1 00:54:16.449 10:06:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82459 00:54:16.449 10:06:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:54:16.449 10:06:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:54:16.707 Running I/O for 10 seconds... 
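The randread phase that begins here was set up by the commands traced just above. Collected into one sequence for readability, it is roughly the following sketch; the backgrounding and the BDEVPERF_PID variable are illustrative, while the paths, RPC socket, NQN, address, and options are the ones shown in the trace.

  # start bdevperf on core 2 (mask 0x4); -z defers the run until the perform_tests RPC
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w randread -t 10 -f &

  # attach the nvmf_timeout bpftrace probes to the bdevperf process
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$BDEVPERF_PID" \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &

  # set the NVMe bdev options used by the test, then attach the controller with a
  # 5 s ctrlr-loss timeout and a 2 s reconnect delay
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # kick off the 10-second random-read run; the next trace line then removes the
  # listener to exercise the timeout/reconnect path
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &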
00:54:17.642 10:06:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:54:17.902 14478.00 IOPS, 56.55 MiB/s [2024-12-09T10:06:25.404Z] [2024-12-09 10:06:25.162637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set
(... the same tcp.c:1790:nvmf_tcp_qpair_set_recv_state *ERROR* for tqpair=0x198ce10 repeats continuously, timestamps 10:06:25.162884 through 10:06:25.164087 ...)
00:54:17.903 [2024-12-09 10:06:25.164095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ce10 is same with the state(6) to be set 00:54:17.903 [2024-12-09 10:06:25.164299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.903 [2024-12-09 10:06:25.164332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.903 [2024-12-09 10:06:25.164355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.903 [2024-12-09 10:06:25.164368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
(... matching READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for sqid:1 nsid:1 len:8, cid:122 down through cid:63 at various LBAs, timestamps 10:06:25.164380 through 10:06:25.165711 ...)
00:54:17.904 [2024-12-09 10:06:25.165723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 
10:06:25.165954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.165987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.165999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166405] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.904 [2024-12-09 10:06:25.166630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.904 [2024-12-09 10:06:25.166640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
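Each pair of records in the burst above is one outstanding READ being completed in software after its submission queue was deleted: the (00/08) status is status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion", and dnr:0 leaves the do-not-retry bit clear, so the command may be retried once the controller comes back. A rough tally of those synthetic completions, assuming this console output has been captured to a file (the file name below is only a placeholder, not something the test produces):

    # Count the SQ-deletion completions in a saved copy of this console log.
    # 'console.log' is a placeholder path for wherever the output was captured.
    grep -c 'ABORTED - SQ DELETION (00/08)' console.log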
00:54:17.905 [2024-12-09 10:06:25.166856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.166983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.166995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.167005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.167019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.167029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.167040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.167050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.167064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.167285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.167368] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.167572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.167717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.167858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.167934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.168102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.168168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:17.905 [2024-12-09 10:06:25.168303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.168367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2509920 is same with the state(6) to be set 00:54:17.905 [2024-12-09 10:06:25.168538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:54:17.905 [2024-12-09 10:06:25.168642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:54:17.905 [2024-12-09 10:06:25.168689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104400 len:8 PRP1 0x0 PRP2 0x0 00:54:17.905 [2024-12-09 10:06:25.168807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.169003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:54:17.905 [2024-12-09 10:06:25.169084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.169213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:54:17.905 [2024-12-09 10:06:25.169273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.169402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:54:17.905 [2024-12-09 10:06:25.169501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.169517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:54:17.905 [2024-12-09 10:06:25.169527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:17.905 [2024-12-09 10:06:25.169537] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ce50 is same with the state(6) to be set 00:54:17.905 [2024-12-09 10:06:25.169823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:54:17.905 [2024-12-09 10:06:25.169853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249ce50 (9): Bad file descriptor 00:54:17.905 [2024-12-09 10:06:25.169996] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:54:17.905 [2024-12-09 10:06:25.170024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249ce50 with addr=10.0.0.3, port=4420 00:54:17.905 [2024-12-09 10:06:25.170041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ce50 is same with the state(6) to be set 00:54:17.905 [2024-12-09 10:06:25.170064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249ce50 (9): Bad file descriptor 00:54:17.905 [2024-12-09 10:06:25.170082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:54:17.905 [2024-12-09 10:06:25.170092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:54:17.905 [2024-12-09 10:06:25.170106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:54:17.905 [2024-12-09 10:06:25.170119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:54:17.905 [2024-12-09 10:06:25.170130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:54:17.905 10:06:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82459 00:54:19.776 8319.50 IOPS, 32.50 MiB/s [2024-12-09T10:06:27.278Z] 5546.33 IOPS, 21.67 MiB/s [2024-12-09T10:06:27.278Z] [2024-12-09 10:06:27.188365] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:54:19.776 [2024-12-09 10:06:27.188599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249ce50 with addr=10.0.0.3, port=4420 00:54:19.776 [2024-12-09 10:06:27.188628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ce50 is same with the state(6) to be set 00:54:19.776 [2024-12-09 10:06:27.188663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249ce50 (9): Bad file descriptor 00:54:19.776 [2024-12-09 10:06:27.188684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:54:19.776 [2024-12-09 10:06:27.188695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:54:19.776 [2024-12-09 10:06:27.188706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:54:19.776 [2024-12-09 10:06:27.188720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
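The connect() failures in this stretch all report errno = 111, which on Linux is ECONNREFUSED: the initiator can still route to 10.0.0.3, but most likely nothing is accepting on TCP port 4420 on the target side at this point in the test, so every reset attempt ends with the controller back in the failed state until the next retry. A one-liner to confirm the errno mapping (a side note, not part of the test scripts):

    # Translate the raw errno printed by uring_sock_create into its symbolic name.
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused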
00:54:19.776 [2024-12-09 10:06:27.188732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:54:21.647 4159.75 IOPS, 16.25 MiB/s [2024-12-09T10:06:29.408Z] 3327.80 IOPS, 13.00 MiB/s [2024-12-09T10:06:29.408Z] [2024-12-09 10:06:29.188911] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:54:21.906 [2024-12-09 10:06:29.189007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249ce50 with addr=10.0.0.3, port=4420 00:54:21.906 [2024-12-09 10:06:29.189026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ce50 is same with the state(6) to be set 00:54:21.906 [2024-12-09 10:06:29.189054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249ce50 (9): Bad file descriptor 00:54:21.906 [2024-12-09 10:06:29.189075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:54:21.906 [2024-12-09 10:06:29.189087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:54:21.906 [2024-12-09 10:06:29.189098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:54:21.906 [2024-12-09 10:06:29.189110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:54:21.906 [2024-12-09 10:06:29.189122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:54:23.799 2773.17 IOPS, 10.83 MiB/s [2024-12-09T10:06:31.301Z] 2377.00 IOPS, 9.29 MiB/s [2024-12-09T10:06:31.301Z] [2024-12-09 10:06:31.189182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:54:23.799 [2024-12-09 10:06:31.189398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:54:23.799 [2024-12-09 10:06:31.189421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:54:23.799 [2024-12-09 10:06:31.189435] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:54:23.799 [2024-12-09 10:06:31.189449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
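Here the I/O phase ends: bdevperf prints its last throughput sample and the per-job summary, and the harness then verifies that the delayed-reconnect path was really exercised by counting the "reconnect delay" probes in trace.txt (three show up in this run, one per roughly two-second back-off). A minimal sketch of the same checks against the numbers in the summary below:

    # Recompute the probe count the harness compares against its threshold;
    # this run records three 'reconnect delay' events in the trace.
    grep -c 'reconnect delay bdev controller NVMe0' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    # Sanity-check the MiB/s column: it is just IOPS x 4 KiB I/O size.
    awk 'BEGIN { printf "%.2f MiB/s\n", 2038.15 * 4096 / 1048576 }'   # -> 7.96 MiB/s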
00:54:24.734 2079.88 IOPS, 8.12 MiB/s 00:54:24.734 Latency(us) 00:54:24.734 [2024-12-09T10:06:32.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:24.734 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:54:24.734 NVMe0n1 : 8.16 2038.15 7.96 15.68 0.00 62421.10 8043.05 7046430.72 00:54:24.734 [2024-12-09T10:06:32.236Z] =================================================================================================================== 00:54:24.734 [2024-12-09T10:06:32.237Z] Total : 2038.15 7.96 15.68 0.00 62421.10 8043.05 7046430.72 00:54:24.735 { 00:54:24.735 "results": [ 00:54:24.735 { 00:54:24.735 "job": "NVMe0n1", 00:54:24.735 "core_mask": "0x4", 00:54:24.735 "workload": "randread", 00:54:24.735 "status": "finished", 00:54:24.735 "queue_depth": 128, 00:54:24.735 "io_size": 4096, 00:54:24.735 "runtime": 8.163764, 00:54:24.735 "iops": 2038.1529892338876, 00:54:24.735 "mibps": 7.961535114194874, 00:54:24.735 "io_failed": 128, 00:54:24.735 "io_timeout": 0, 00:54:24.735 "avg_latency_us": 62421.09723320157, 00:54:24.735 "min_latency_us": 8043.054545454545, 00:54:24.735 "max_latency_us": 7046430.72 00:54:24.735 } 00:54:24.735 ], 00:54:24.735 "core_count": 1 00:54:24.735 } 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:54:24.735 Attaching 5 probes... 00:54:24.735 1432.591027: reset bdev controller NVMe0 00:54:24.735 1432.682762: reconnect bdev controller NVMe0 00:54:24.735 3451.011449: reconnect delay bdev controller NVMe0 00:54:24.735 3451.031468: reconnect bdev controller NVMe0 00:54:24.735 5451.546274: reconnect delay bdev controller NVMe0 00:54:24.735 5451.565928: reconnect bdev controller NVMe0 00:54:24.735 7451.935898: reconnect delay bdev controller NVMe0 00:54:24.735 7451.957360: reconnect bdev controller NVMe0 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82413 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82410 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82410 ']' 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82410 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:24.735 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82410 00:54:24.993 killing process with pid 82410 00:54:24.993 Received shutdown signal, test time was about 8.229157 seconds 00:54:24.993 00:54:24.993 Latency(us) 00:54:24.993 [2024-12-09T10:06:32.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:24.993 [2024-12-09T10:06:32.495Z] =================================================================================================================== 00:54:24.993 [2024-12-09T10:06:32.495Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:54:24.993 10:06:32 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:54:24.993 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:54:24.993 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82410' 00:54:24.993 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82410 00:54:24.993 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82410 00:54:24.993 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:25.251 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:54:25.251 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:54:25.251 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:54:25.251 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:54:25.510 rmmod nvme_tcp 00:54:25.510 rmmod nvme_fabrics 00:54:25.510 rmmod nvme_keyring 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81970 ']' 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81970 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81970 ']' 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81970 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81970 00:54:25.510 killing process with pid 81970 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81970' 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81970 00:54:25.510 10:06:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81970 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:54:25.769 10:06:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:25.769 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:54:26.027 ************************************ 00:54:26.027 END TEST nvmf_timeout 00:54:26.027 ************************************ 00:54:26.027 00:54:26.027 real 0m46.916s 00:54:26.027 user 2m17.354s 00:54:26.027 sys 0m5.570s 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:54:26.027 ************************************ 00:54:26.027 END TEST nvmf_host 00:54:26.027 ************************************ 00:54:26.027 00:54:26.027 real 5m6.383s 00:54:26.027 user 13m24.389s 00:54:26.027 sys 1m6.754s 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:54:26.027 10:06:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:54:26.027 10:06:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:54:26.027 10:06:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:54:26.027 ************************************ 00:54:26.027 END TEST nvmf_tcp 00:54:26.027 ************************************ 00:54:26.027 00:54:26.027 real 13m2.188s 00:54:26.027 user 31m36.134s 00:54:26.027 sys 3m6.549s 00:54:26.028 10:06:33 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:26.028 10:06:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:26.028 10:06:33 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:54:26.028 10:06:33 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:54:26.028 10:06:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:26.028 10:06:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:26.028 10:06:33 -- common/autotest_common.sh@10 -- # set +x 00:54:26.028 ************************************ 00:54:26.028 START TEST nvmf_dif 00:54:26.028 ************************************ 00:54:26.028 10:06:33 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:54:26.028 * Looking for test storage... 00:54:26.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:54:26.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:26.288 --rc genhtml_branch_coverage=1 00:54:26.288 --rc genhtml_function_coverage=1 00:54:26.288 --rc genhtml_legend=1 00:54:26.288 --rc geninfo_all_blocks=1 00:54:26.288 --rc geninfo_unexecuted_blocks=1 00:54:26.288 00:54:26.288 ' 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:54:26.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:26.288 --rc genhtml_branch_coverage=1 00:54:26.288 --rc genhtml_function_coverage=1 00:54:26.288 --rc genhtml_legend=1 00:54:26.288 --rc geninfo_all_blocks=1 00:54:26.288 --rc geninfo_unexecuted_blocks=1 00:54:26.288 00:54:26.288 ' 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:54:26.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:26.288 --rc genhtml_branch_coverage=1 00:54:26.288 --rc genhtml_function_coverage=1 00:54:26.288 --rc genhtml_legend=1 00:54:26.288 --rc geninfo_all_blocks=1 00:54:26.288 --rc geninfo_unexecuted_blocks=1 00:54:26.288 00:54:26.288 ' 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:54:26.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:26.288 --rc genhtml_branch_coverage=1 00:54:26.288 --rc genhtml_function_coverage=1 00:54:26.288 --rc genhtml_legend=1 00:54:26.288 --rc geninfo_all_blocks=1 00:54:26.288 --rc geninfo_unexecuted_blocks=1 00:54:26.288 00:54:26.288 ' 00:54:26.288 10:06:33 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:26.288 10:06:33 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:26.288 10:06:33 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:26.288 10:06:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:26.288 10:06:33 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:26.288 10:06:33 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:26.288 10:06:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:54:26.288 10:06:33 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:26.288 10:06:33 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:54:26.288 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:26.288 10:06:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:54:26.288 10:06:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:54:26.288 10:06:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:54:26.288 10:06:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:54:26.288 10:06:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:54:26.288 10:06:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:26.288 10:06:33 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:54:26.289 Cannot find device 
"nvmf_init_br" 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@162 -- # true 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:54:26.289 Cannot find device "nvmf_init_br2" 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@163 -- # true 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:54:26.289 Cannot find device "nvmf_tgt_br" 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@164 -- # true 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:54:26.289 Cannot find device "nvmf_tgt_br2" 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@165 -- # true 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:54:26.289 Cannot find device "nvmf_init_br" 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@166 -- # true 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:54:26.289 Cannot find device "nvmf_init_br2" 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@167 -- # true 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:54:26.289 Cannot find device "nvmf_tgt_br" 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@168 -- # true 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:54:26.289 Cannot find device "nvmf_tgt_br2" 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@169 -- # true 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:54:26.289 Cannot find device "nvmf_br" 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@170 -- # true 00:54:26.289 10:06:33 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:54:26.547 Cannot find device "nvmf_init_if" 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@171 -- # true 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:54:26.547 Cannot find device "nvmf_init_if2" 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@172 -- # true 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:26.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@173 -- # true 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:26.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@174 -- # true 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:26.547 10:06:33 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:26.548 10:06:33 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:26.548 10:06:33 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:54:26.548 10:06:33 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:54:26.548 10:06:33 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:54:26.548 10:06:33 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:54:26.548 10:06:33 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:26.548 10:06:33 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:26.548 10:06:33 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:26.548 10:06:33 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:54:26.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:26.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:54:26.548 00:54:26.548 --- 10.0.0.3 ping statistics --- 00:54:26.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:26.548 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:54:26.548 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:54:26.548 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:54:26.548 00:54:26.548 --- 10.0.0.4 ping statistics --- 00:54:26.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:26.548 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:26.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:26.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:54:26.548 00:54:26.548 --- 10.0.0.1 ping statistics --- 00:54:26.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:26.548 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:54:26.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:26.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:54:26.548 00:54:26.548 --- 10.0.0.2 ping statistics --- 00:54:26.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:26.548 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:54:26.548 10:06:34 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:54:27.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:54:27.113 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:54:27.113 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:54:27.113 10:06:34 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:27.114 10:06:34 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:54:27.114 10:06:34 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:54:27.114 10:06:34 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:27.114 10:06:34 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:54:27.114 10:06:34 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:54:27.114 10:06:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:54:27.114 10:06:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:54:27.114 10:06:34 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:27.114 10:06:34 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:27.114 10:06:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:27.114 10:06:34 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82946 00:54:27.114 10:06:34 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:54:27.114 10:06:34 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82946 00:54:27.114 10:06:34 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82946 ']' 00:54:27.114 10:06:34 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:27.114 10:06:34 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:27.114 10:06:34 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:27.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:54:27.114 10:06:34 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:27.114 10:06:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:27.114 [2024-12-09 10:06:34.519921] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:54:27.114 [2024-12-09 10:06:34.520272] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:27.372 [2024-12-09 10:06:34.675867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:27.372 [2024-12-09 10:06:34.715162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:27.372 [2024-12-09 10:06:34.715221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:27.372 [2024-12-09 10:06:34.715236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:27.372 [2024-12-09 10:06:34.715247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:27.372 [2024-12-09 10:06:34.715256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:27.372 [2024-12-09 10:06:34.715615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:27.372 [2024-12-09 10:06:34.750467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:54:27.372 10:06:34 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:27.372 10:06:34 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:54:27.372 10:06:34 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:27.372 10:06:34 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:27.372 10:06:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:27.372 10:06:34 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:27.372 10:06:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:54:27.372 10:06:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:54:27.372 10:06:34 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:27.372 10:06:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:27.372 [2024-12-09 10:06:34.855036] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:27.372 10:06:34 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:27.372 10:06:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:54:27.372 10:06:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:27.372 10:06:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:27.372 10:06:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:27.372 ************************************ 00:54:27.372 START TEST fio_dif_1_default 00:54:27.372 ************************************ 00:54:27.372 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:54:27.372 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:54:27.372 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:54:27.372 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:54:27.372 10:06:34 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:54:27.372 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:54:27.372 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:27.631 bdev_null0 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:27.631 [2024-12-09 10:06:34.903139] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:54:27.631 { 00:54:27.631 "params": { 00:54:27.631 "name": "Nvme$subsystem", 00:54:27.631 "trtype": "$TEST_TRANSPORT", 00:54:27.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:27.631 "adrfam": "ipv4", 00:54:27.631 "trsvcid": "$NVMF_PORT", 00:54:27.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:27.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:27.631 "hdgst": ${hdgst:-false}, 00:54:27.631 "ddgst": ${ddgst:-false} 00:54:27.631 }, 00:54:27.631 "method": "bdev_nvme_attach_controller" 00:54:27.631 } 00:54:27.631 EOF 00:54:27.631 )") 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@82 -- # gen_fio_conf 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:54:27.631 "params": { 00:54:27.631 "name": "Nvme0", 00:54:27.631 "trtype": "tcp", 00:54:27.631 "traddr": "10.0.0.3", 00:54:27.631 "adrfam": "ipv4", 00:54:27.631 "trsvcid": "4420", 00:54:27.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:27.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:27.631 "hdgst": false, 00:54:27.631 "ddgst": false 00:54:27.631 }, 00:54:27.631 "method": "bdev_nvme_attach_controller" 00:54:27.631 }' 00:54:27.631 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:27.632 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:27.632 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:27.632 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:27.632 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:54:27.632 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:27.632 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:27.632 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:27.632 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:54:27.632 10:06:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:27.890 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:54:27.890 fio-3.35 00:54:27.890 Starting 1 thread 00:54:40.093 00:54:40.093 filename0: (groupid=0, jobs=1): err= 0: pid=83005: Mon Dec 9 10:06:45 2024 00:54:40.093 read: IOPS=8362, BW=32.7MiB/s (34.3MB/s)(327MiB/10001msec) 00:54:40.093 slat (usec): min=6, max=132, avg= 9.02, stdev= 3.76 00:54:40.093 clat (usec): min=351, max=2650, avg=451.86, stdev=42.49 00:54:40.093 lat (usec): min=358, max=2678, avg=460.89, stdev=43.20 00:54:40.093 clat percentiles (usec): 00:54:40.093 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 400], 20.00th=[ 420], 00:54:40.093 | 30.00th=[ 433], 40.00th=[ 441], 50.00th=[ 453], 60.00th=[ 461], 00:54:40.093 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 502], 95.00th=[ 519], 00:54:40.093 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 586], 99.95th=[ 603], 00:54:40.093 | 99.99th=[ 693] 00:54:40.093 bw ( KiB/s): min=32128, max=34816, per=100.00%, avg=33487.16, stdev=634.59, samples=19 00:54:40.093 iops : min= 8032, max= 8704, avg=8371.79, stdev=158.65, samples=19 00:54:40.093 lat (usec) : 500=89.49%, 750=10.50% 00:54:40.093 lat (msec) : 2=0.01%, 4=0.01% 00:54:40.093 cpu : usr=83.57%, sys=14.45%, ctx=14, majf=0, minf=9 00:54:40.093 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:40.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:40.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:40.093 issued rwts: total=83632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:40.093 latency : target=0, window=0, 
percentile=100.00%, depth=4 00:54:40.093 00:54:40.093 Run status group 0 (all jobs): 00:54:40.093 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=327MiB (343MB), run=10001-10001msec 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 ************************************ 00:54:40.093 END TEST fio_dif_1_default 00:54:40.093 ************************************ 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:40.093 00:54:40.093 real 0m10.999s 00:54:40.093 user 0m9.048s 00:54:40.093 sys 0m1.670s 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 10:06:45 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:54:40.093 10:06:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:40.093 10:06:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 ************************************ 00:54:40.093 START TEST fio_dif_1_multi_subsystems 00:54:40.093 ************************************ 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 bdev_null0 
00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 [2024-12-09 10:06:45.949474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 bdev_null1 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:54:40.093 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:54:40.094 { 00:54:40.094 "params": { 00:54:40.094 "name": "Nvme$subsystem", 00:54:40.094 "trtype": "$TEST_TRANSPORT", 00:54:40.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:40.094 "adrfam": "ipv4", 00:54:40.094 "trsvcid": "$NVMF_PORT", 00:54:40.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:40.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:40.094 "hdgst": ${hdgst:-false}, 00:54:40.094 "ddgst": ${ddgst:-false} 00:54:40.094 }, 00:54:40.094 "method": "bdev_nvme_attach_controller" 00:54:40.094 } 00:54:40.094 EOF 00:54:40.094 )") 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:54:40.094 10:06:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:54:40.094 { 00:54:40.094 "params": { 00:54:40.094 "name": "Nvme$subsystem", 00:54:40.094 "trtype": "$TEST_TRANSPORT", 00:54:40.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:40.094 "adrfam": "ipv4", 00:54:40.094 "trsvcid": "$NVMF_PORT", 00:54:40.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:40.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:40.094 "hdgst": ${hdgst:-false}, 00:54:40.094 "ddgst": ${ddgst:-false} 00:54:40.094 }, 00:54:40.094 "method": "bdev_nvme_attach_controller" 00:54:40.094 } 00:54:40.094 EOF 00:54:40.094 )") 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:54:40.094 "params": { 00:54:40.094 "name": "Nvme0", 00:54:40.094 "trtype": "tcp", 00:54:40.094 "traddr": "10.0.0.3", 00:54:40.094 "adrfam": "ipv4", 00:54:40.094 "trsvcid": "4420", 00:54:40.094 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:40.094 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:40.094 "hdgst": false, 00:54:40.094 "ddgst": false 00:54:40.094 }, 00:54:40.094 "method": "bdev_nvme_attach_controller" 00:54:40.094 },{ 00:54:40.094 "params": { 00:54:40.094 "name": "Nvme1", 00:54:40.094 "trtype": "tcp", 00:54:40.094 "traddr": "10.0.0.3", 00:54:40.094 "adrfam": "ipv4", 00:54:40.094 "trsvcid": "4420", 00:54:40.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:40.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:40.094 "hdgst": false, 00:54:40.094 "ddgst": false 00:54:40.094 }, 00:54:40.094 "method": "bdev_nvme_attach_controller" 00:54:40.094 }' 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:40.094 10:06:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:54:40.094 10:06:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:40.094 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:54:40.094 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:54:40.094 fio-3.35 00:54:40.094 Starting 2 threads 00:54:50.110 00:54:50.110 filename0: (groupid=0, jobs=1): err= 0: pid=83165: Mon Dec 9 10:06:56 2024 00:54:50.110 read: IOPS=4576, BW=17.9MiB/s (18.7MB/s)(179MiB/10001msec) 00:54:50.110 slat (nsec): min=7364, max=57638, avg=14145.60, stdev=4719.77 00:54:50.110 clat (usec): min=646, max=3032, avg=835.91, stdev=53.55 00:54:50.110 lat (usec): min=656, max=3054, avg=850.06, stdev=54.63 00:54:50.110 clat percentiles (usec): 00:54:50.110 | 1.00th=[ 717], 5.00th=[ 750], 10.00th=[ 775], 20.00th=[ 799], 00:54:50.110 | 30.00th=[ 816], 40.00th=[ 824], 50.00th=[ 840], 60.00th=[ 848], 00:54:50.110 | 70.00th=[ 865], 80.00th=[ 873], 90.00th=[ 898], 95.00th=[ 914], 00:54:50.110 | 99.00th=[ 947], 99.50th=[ 955], 99.90th=[ 996], 99.95th=[ 1029], 00:54:50.110 | 99.99th=[ 1827] 00:54:50.110 bw ( KiB/s): min=18176, max=18560, per=50.01%, avg=18310.74, stdev=95.75, samples=19 00:54:50.110 iops : min= 4544, max= 4640, avg=4577.68, stdev=23.94, samples=19 00:54:50.110 lat (usec) : 750=4.84%, 1000=95.07% 00:54:50.110 lat (msec) : 2=0.08%, 4=0.01% 00:54:50.110 cpu : usr=90.01%, sys=8.66%, ctx=9, majf=0, minf=0 00:54:50.110 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:50.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:50.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:50.110 issued rwts: total=45768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:50.110 latency : target=0, window=0, percentile=100.00%, depth=4 00:54:50.110 filename1: (groupid=0, jobs=1): err= 0: pid=83166: Mon Dec 9 10:06:56 2024 00:54:50.110 read: IOPS=4576, BW=17.9MiB/s (18.7MB/s)(179MiB/10001msec) 00:54:50.110 slat (nsec): min=7155, max=60263, avg=14142.76, stdev=4809.41 00:54:50.110 clat (usec): min=689, max=3014, avg=834.95, stdev=45.80 00:54:50.111 lat (usec): min=703, max=3033, avg=849.10, stdev=46.25 00:54:50.111 clat percentiles (usec): 00:54:50.111 | 1.00th=[ 742], 5.00th=[ 766], 10.00th=[ 783], 20.00th=[ 799], 00:54:50.111 | 30.00th=[ 816], 40.00th=[ 824], 50.00th=[ 832], 60.00th=[ 848], 00:54:50.111 | 70.00th=[ 857], 80.00th=[ 865], 90.00th=[ 889], 95.00th=[ 898], 00:54:50.111 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 1004], 00:54:50.111 | 99.99th=[ 1975] 00:54:50.111 bw ( KiB/s): min=18176, max=18560, per=50.01%, avg=18310.74, stdev=95.75, samples=19 00:54:50.111 iops : min= 4544, max= 4640, avg=4577.68, stdev=23.94, samples=19 00:54:50.111 lat (usec) : 750=1.61%, 1000=98.34% 00:54:50.111 lat (msec) : 2=0.04%, 4=0.01% 00:54:50.111 cpu : usr=90.18%, sys=8.45%, ctx=19, majf=0, minf=0 00:54:50.111 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:50.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:50.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:50.111 issued rwts: total=45768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:50.111 latency : target=0, window=0, percentile=100.00%, depth=4 00:54:50.111 00:54:50.111 Run status group 0 (all jobs): 00:54:50.111 READ: bw=35.8MiB/s (37.5MB/s), 17.9MiB/s-17.9MiB/s (18.7MB/s-18.7MB/s), io=358MiB (375MB), run=10001-10001msec 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:50.111 ************************************ 00:54:50.111 END TEST fio_dif_1_multi_subsystems 00:54:50.111 ************************************ 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:50.111 00:54:50.111 real 0m11.127s 00:54:50.111 user 0m18.814s 00:54:50.111 sys 0m1.943s 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:50.111 10:06:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:50.111 10:06:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:54:50.111 10:06:57 
nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:50.111 10:06:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:50.111 10:06:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:50.111 ************************************ 00:54:50.111 START TEST fio_dif_rand_params 00:54:50.111 ************************************ 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:50.111 bdev_null0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:50.111 [2024-12-09 10:06:57.128005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:54:50.111 { 00:54:50.111 "params": { 00:54:50.111 "name": "Nvme$subsystem", 00:54:50.111 "trtype": "$TEST_TRANSPORT", 00:54:50.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:50.111 "adrfam": "ipv4", 00:54:50.111 "trsvcid": "$NVMF_PORT", 00:54:50.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:50.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:50.111 "hdgst": ${hdgst:-false}, 00:54:50.111 "ddgst": ${ddgst:-false} 00:54:50.111 }, 00:54:50.111 "method": "bdev_nvme_attach_controller" 00:54:50.111 } 00:54:50.111 EOF 00:54:50.111 )") 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:54:50.111 10:06:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:54:50.111 "params": { 00:54:50.111 "name": "Nvme0", 00:54:50.111 "trtype": "tcp", 00:54:50.111 "traddr": "10.0.0.3", 00:54:50.111 "adrfam": "ipv4", 00:54:50.111 "trsvcid": "4420", 00:54:50.111 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:50.111 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:50.111 "hdgst": false, 00:54:50.111 "ddgst": false 00:54:50.111 }, 00:54:50.112 "method": "bdev_nvme_attach_controller" 00:54:50.112 }' 00:54:50.112 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:50.112 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:50.112 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:50.112 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:50.112 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:54:50.112 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:50.112 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:50.112 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:50.112 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:54:50.112 10:06:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:50.112 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:54:50.112 ... 
00:54:50.112 fio-3.35 00:54:50.112 Starting 3 threads 00:54:55.380 00:54:55.380 filename0: (groupid=0, jobs=1): err= 0: pid=83322: Mon Dec 9 10:07:02 2024 00:54:55.380 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(152MiB/5012msec) 00:54:55.380 slat (nsec): min=7601, max=53107, avg=11428.08, stdev=5506.68 00:54:55.380 clat (usec): min=4855, max=13849, avg=12344.55, stdev=431.83 00:54:55.380 lat (usec): min=4864, max=13864, avg=12355.97, stdev=431.94 00:54:55.380 clat percentiles (usec): 00:54:55.380 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12125], 20.00th=[12256], 00:54:55.380 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12256], 60.00th=[12387], 00:54:55.380 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12649], 95.00th=[12780], 00:54:55.380 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13829], 99.95th=[13829], 00:54:55.380 | 99.99th=[13829] 00:54:55.380 bw ( KiB/s): min=30720, max=31488, per=33.39%, avg=31027.20, stdev=396.59, samples=10 00:54:55.380 iops : min= 240, max= 246, avg=242.40, stdev= 3.10, samples=10 00:54:55.380 lat (msec) : 10=0.25%, 20=99.75% 00:54:55.380 cpu : usr=91.30%, sys=8.10%, ctx=49, majf=0, minf=0 00:54:55.380 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:55.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:55.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:55.380 issued rwts: total=1215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:55.380 latency : target=0, window=0, percentile=100.00%, depth=3 00:54:55.380 filename0: (groupid=0, jobs=1): err= 0: pid=83323: Mon Dec 9 10:07:02 2024 00:54:55.380 read: IOPS=242, BW=30.3MiB/s (31.7MB/s)(152MiB/5007msec) 00:54:55.380 slat (nsec): min=8399, max=46360, avg=15465.49, stdev=3749.64 00:54:55.380 clat (usec): min=10499, max=14594, avg=12358.54, stdev=272.04 00:54:55.380 lat (usec): min=10514, max=14621, avg=12374.00, stdev=272.41 00:54:55.380 clat percentiles (usec): 00:54:55.380 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12125], 20.00th=[12256], 00:54:55.380 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12256], 60.00th=[12387], 00:54:55.380 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12649], 95.00th=[12780], 00:54:55.380 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14615], 99.95th=[14615], 00:54:55.380 | 99.99th=[14615] 00:54:55.380 bw ( KiB/s): min=30720, max=31488, per=33.30%, avg=30950.40, stdev=370.98, samples=10 00:54:55.380 iops : min= 240, max= 246, avg=241.80, stdev= 2.90, samples=10 00:54:55.380 lat (msec) : 20=100.00% 00:54:55.380 cpu : usr=91.71%, sys=7.69%, ctx=10, majf=0, minf=0 00:54:55.380 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:55.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:55.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:55.380 issued rwts: total=1212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:55.380 latency : target=0, window=0, percentile=100.00%, depth=3 00:54:55.380 filename0: (groupid=0, jobs=1): err= 0: pid=83324: Mon Dec 9 10:07:02 2024 00:54:55.380 read: IOPS=242, BW=30.3MiB/s (31.7MB/s)(152MiB/5007msec) 00:54:55.380 slat (nsec): min=8272, max=41893, avg=15442.57, stdev=3440.08 00:54:55.380 clat (usec): min=10497, max=14269, avg=12357.29, stdev=266.67 00:54:55.380 lat (usec): min=10512, max=14295, avg=12372.73, stdev=266.78 00:54:55.380 clat percentiles (usec): 00:54:55.380 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12125], 20.00th=[12256], 00:54:55.380 | 30.00th=[12256], 40.00th=[12256], 
50.00th=[12256], 60.00th=[12387], 00:54:55.380 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12649], 95.00th=[12780], 00:54:55.380 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14222], 99.95th=[14222], 00:54:55.380 | 99.99th=[14222] 00:54:55.380 bw ( KiB/s): min=30720, max=31488, per=33.31%, avg=30956.50, stdev=367.25, samples=10 00:54:55.380 iops : min= 240, max= 246, avg=241.80, stdev= 2.90, samples=10 00:54:55.380 lat (msec) : 20=100.00% 00:54:55.380 cpu : usr=91.81%, sys=7.61%, ctx=6, majf=0, minf=0 00:54:55.380 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:55.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:55.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:55.380 issued rwts: total=1212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:55.380 latency : target=0, window=0, percentile=100.00%, depth=3 00:54:55.380 00:54:55.380 Run status group 0 (all jobs): 00:54:55.380 READ: bw=90.8MiB/s (95.2MB/s), 30.3MiB/s-30.3MiB/s (31.7MB/s-31.8MB/s), io=455MiB (477MB), run=5007-5012msec 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:54:55.639 10:07:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.639 bdev_null0 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.639 [2024-12-09 10:07:03.114225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.639 bdev_null1 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.639 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.899 bdev_null2 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:54:55.899 { 00:54:55.899 "params": { 00:54:55.899 "name": "Nvme$subsystem", 00:54:55.899 "trtype": "$TEST_TRANSPORT", 00:54:55.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:55.899 "adrfam": "ipv4", 00:54:55.899 "trsvcid": "$NVMF_PORT", 00:54:55.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:54:55.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:55.899 "hdgst": ${hdgst:-false}, 00:54:55.899 "ddgst": ${ddgst:-false} 00:54:55.899 }, 00:54:55.899 "method": "bdev_nvme_attach_controller" 00:54:55.899 } 00:54:55.899 EOF 00:54:55.899 )") 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:54:55.899 { 00:54:55.899 "params": { 00:54:55.899 "name": "Nvme$subsystem", 00:54:55.899 "trtype": "$TEST_TRANSPORT", 00:54:55.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:55.899 "adrfam": "ipv4", 00:54:55.899 "trsvcid": "$NVMF_PORT", 00:54:55.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:55.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:55.899 "hdgst": ${hdgst:-false}, 00:54:55.899 "ddgst": ${ddgst:-false} 00:54:55.899 }, 00:54:55.899 "method": "bdev_nvme_attach_controller" 00:54:55.899 } 00:54:55.899 EOF 00:54:55.899 )") 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:54:55.899 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:54:55.900 10:07:03 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:54:55.900 { 00:54:55.900 "params": { 00:54:55.900 "name": "Nvme$subsystem", 00:54:55.900 "trtype": "$TEST_TRANSPORT", 00:54:55.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:55.900 "adrfam": "ipv4", 00:54:55.900 "trsvcid": "$NVMF_PORT", 00:54:55.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:55.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:55.900 "hdgst": ${hdgst:-false}, 00:54:55.900 "ddgst": ${ddgst:-false} 00:54:55.900 }, 00:54:55.900 "method": "bdev_nvme_attach_controller" 00:54:55.900 } 00:54:55.900 EOF 00:54:55.900 )") 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:54:55.900 "params": { 00:54:55.900 "name": "Nvme0", 00:54:55.900 "trtype": "tcp", 00:54:55.900 "traddr": "10.0.0.3", 00:54:55.900 "adrfam": "ipv4", 00:54:55.900 "trsvcid": "4420", 00:54:55.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:55.900 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:55.900 "hdgst": false, 00:54:55.900 "ddgst": false 00:54:55.900 }, 00:54:55.900 "method": "bdev_nvme_attach_controller" 00:54:55.900 },{ 00:54:55.900 "params": { 00:54:55.900 "name": "Nvme1", 00:54:55.900 "trtype": "tcp", 00:54:55.900 "traddr": "10.0.0.3", 00:54:55.900 "adrfam": "ipv4", 00:54:55.900 "trsvcid": "4420", 00:54:55.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:55.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:55.900 "hdgst": false, 00:54:55.900 "ddgst": false 00:54:55.900 }, 00:54:55.900 "method": "bdev_nvme_attach_controller" 00:54:55.900 },{ 00:54:55.900 "params": { 00:54:55.900 "name": "Nvme2", 00:54:55.900 "trtype": "tcp", 00:54:55.900 "traddr": "10.0.0.3", 00:54:55.900 "adrfam": "ipv4", 00:54:55.900 "trsvcid": "4420", 00:54:55.900 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:54:55.900 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:54:55.900 "hdgst": false, 00:54:55.900 "ddgst": false 00:54:55.900 }, 00:54:55.900 "method": "bdev_nvme_attach_controller" 00:54:55.900 }' 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:54:55.900 10:07:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:56.169 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:54:56.169 ... 00:54:56.169 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:54:56.169 ... 00:54:56.169 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:54:56.169 ... 00:54:56.169 fio-3.35 00:54:56.169 Starting 24 threads 00:55:08.401 00:55:08.401 filename0: (groupid=0, jobs=1): err= 0: pid=83419: Mon Dec 9 10:07:14 2024 00:55:08.401 read: IOPS=218, BW=875KiB/s (896kB/s)(8788KiB/10044msec) 00:55:08.401 slat (usec): min=5, max=4042, avg=18.79, stdev=121.36 00:55:08.401 clat (msec): min=7, max=134, avg=73.02, stdev=22.03 00:55:08.401 lat (msec): min=7, max=134, avg=73.04, stdev=22.02 00:55:08.401 clat percentiles (msec): 00:55:08.401 | 1.00th=[ 12], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:55:08.401 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 79], 00:55:08.401 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 109], 00:55:08.401 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 125], 99.95th=[ 131], 00:55:08.401 | 99.99th=[ 136] 00:55:08.401 bw ( KiB/s): min= 664, max= 1445, per=4.22%, avg=872.25, stdev=161.21, samples=20 00:55:08.401 iops : min= 166, max= 361, avg=218.05, stdev=40.26, samples=20 00:55:08.401 lat (msec) : 10=0.73%, 20=2.18%, 50=11.15%, 100=74.01%, 250=11.93% 00:55:08.401 cpu : usr=38.23%, sys=2.47%, ctx=1377, majf=0, minf=9 00:55:08.401 IO depths : 1=0.1%, 2=0.5%, 4=1.6%, 8=81.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:55:08.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.401 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.401 issued rwts: total=2197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.401 filename0: (groupid=0, jobs=1): err= 0: pid=83420: Mon Dec 9 10:07:14 2024 00:55:08.401 read: IOPS=206, BW=828KiB/s (847kB/s)(8276KiB/10001msec) 00:55:08.401 slat (usec): min=4, max=8026, avg=24.16, stdev=272.26 00:55:08.401 clat (msec): min=6, max=167, avg=77.23, stdev=24.73 00:55:08.401 lat (msec): min=6, max=167, avg=77.26, stdev=24.73 00:55:08.401 clat percentiles (msec): 00:55:08.401 | 1.00th=[ 9], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 56], 00:55:08.401 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:55:08.401 | 70.00th=[ 85], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 118], 00:55:08.401 | 99.00th=[ 148], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 167], 00:55:08.401 | 99.99th=[ 167] 00:55:08.401 bw ( KiB/s): min= 496, max= 1024, per=3.88%, avg=802.53, stdev=163.74, samples=19 00:55:08.401 iops : min= 124, max= 256, avg=200.63, stdev=40.93, samples=19 00:55:08.401 lat (msec) : 10=1.55%, 20=0.14%, 50=14.21%, 100=64.43%, 250=19.67% 00:55:08.401 cpu : usr=35.82%, sys=2.27%, ctx=1106, majf=0, minf=9 00:55:08.401 IO depths : 1=0.1%, 2=2.6%, 4=10.2%, 8=72.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:55:08.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.401 complete : 0=0.0%, 4=89.8%, 8=7.9%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.401 issued rwts: total=2069,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:55:08.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.401 filename0: (groupid=0, jobs=1): err= 0: pid=83421: Mon Dec 9 10:07:14 2024 00:55:08.401 read: IOPS=212, BW=850KiB/s (870kB/s)(8504KiB/10009msec) 00:55:08.401 slat (usec): min=4, max=9030, avg=30.21, stdev=351.66 00:55:08.401 clat (msec): min=12, max=155, avg=75.20, stdev=22.10 00:55:08.401 lat (msec): min=12, max=156, avg=75.23, stdev=22.09 00:55:08.401 clat percentiles (msec): 00:55:08.401 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 56], 00:55:08.401 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 79], 00:55:08.401 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 121], 00:55:08.401 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 157], 00:55:08.401 | 99.99th=[ 157] 00:55:08.401 bw ( KiB/s): min= 464, max= 1040, per=4.09%, avg=846.40, stdev=136.25, samples=20 00:55:08.401 iops : min= 116, max= 260, avg=211.60, stdev=34.06, samples=20 00:55:08.401 lat (msec) : 20=0.75%, 50=14.02%, 100=72.30%, 250=12.94% 00:55:08.401 cpu : usr=33.20%, sys=1.76%, ctx=1049, majf=0, minf=9 00:55:08.401 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:55:08.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.401 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.401 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.401 filename0: (groupid=0, jobs=1): err= 0: pid=83422: Mon Dec 9 10:07:14 2024 00:55:08.401 read: IOPS=223, BW=894KiB/s (916kB/s)(8992KiB/10056msec) 00:55:08.401 slat (usec): min=6, max=4022, avg=15.37, stdev=84.72 00:55:08.401 clat (usec): min=1527, max=139816, avg=71375.47, stdev=26464.32 00:55:08.401 lat (usec): min=1535, max=139831, avg=71390.84, stdev=26463.20 00:55:08.401 clat percentiles (usec): 00:55:08.401 | 1.00th=[ 1663], 5.00th=[ 10683], 10.00th=[ 46400], 20.00th=[ 53216], 00:55:08.401 | 30.00th=[ 63701], 40.00th=[ 70779], 50.00th=[ 72877], 60.00th=[ 79168], 00:55:08.401 | 70.00th=[ 82314], 80.00th=[ 87557], 90.00th=[105382], 95.00th=[111674], 00:55:08.401 | 99.00th=[122160], 99.50th=[139461], 99.90th=[139461], 99.95th=[139461], 00:55:08.401 | 99.99th=[139461] 00:55:08.401 bw ( KiB/s): min= 624, max= 2171, per=4.31%, avg=892.45, stdev=318.61, samples=20 00:55:08.401 iops : min= 156, max= 542, avg=223.05, stdev=79.52, samples=20 00:55:08.401 lat (msec) : 2=2.76%, 4=0.80%, 10=0.71%, 20=2.89%, 50=9.25% 00:55:08.401 lat (msec) : 100=70.60%, 250=12.99% 00:55:08.401 cpu : usr=45.74%, sys=2.40%, ctx=1090, majf=0, minf=9 00:55:08.401 IO depths : 1=0.2%, 2=1.4%, 4=4.9%, 8=77.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:55:08.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.401 complete : 0=0.0%, 4=88.8%, 8=10.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.401 issued rwts: total=2248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.401 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.401 filename0: (groupid=0, jobs=1): err= 0: pid=83423: Mon Dec 9 10:07:14 2024 00:55:08.401 read: IOPS=208, BW=832KiB/s (852kB/s)(8360KiB/10043msec) 00:55:08.401 slat (usec): min=6, max=8033, avg=25.91, stdev=280.46 00:55:08.401 clat (msec): min=8, max=148, avg=76.74, stdev=21.11 00:55:08.401 lat (msec): min=8, max=148, avg=76.76, stdev=21.11 00:55:08.401 clat percentiles (msec): 00:55:08.401 | 1.00th=[ 14], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 61], 
00:55:08.401 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 81], 00:55:08.401 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 113], 00:55:08.401 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 144], 00:55:08.401 | 99.99th=[ 148] 00:55:08.401 bw ( KiB/s): min= 608, max= 1280, per=4.01%, avg=829.60, stdev=143.58, samples=20 00:55:08.401 iops : min= 152, max= 320, avg=207.40, stdev=35.90, samples=20 00:55:08.402 lat (msec) : 10=0.10%, 20=2.11%, 50=6.12%, 100=77.37%, 250=14.31% 00:55:08.402 cpu : usr=37.13%, sys=2.36%, ctx=1388, majf=0, minf=9 00:55:08.402 IO depths : 1=0.1%, 2=0.5%, 4=2.2%, 8=80.3%, 16=16.9%, 32=0.0%, >=64=0.0% 00:55:08.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 complete : 0=0.0%, 4=88.5%, 8=11.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.402 filename0: (groupid=0, jobs=1): err= 0: pid=83424: Mon Dec 9 10:07:14 2024 00:55:08.402 read: IOPS=223, BW=895KiB/s (916kB/s)(8952KiB/10003msec) 00:55:08.402 slat (usec): min=8, max=11034, avg=33.54, stdev=410.80 00:55:08.402 clat (msec): min=2, max=156, avg=71.39, stdev=23.42 00:55:08.402 lat (msec): min=2, max=156, avg=71.42, stdev=23.43 00:55:08.402 clat percentiles (msec): 00:55:08.402 | 1.00th=[ 7], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:55:08.402 | 30.00th=[ 60], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:55:08.402 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 110], 00:55:08.402 | 99.00th=[ 127], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:55:08.402 | 99.99th=[ 157] 00:55:08.402 bw ( KiB/s): min= 520, max= 1024, per=4.18%, avg=865.68, stdev=137.52, samples=19 00:55:08.402 iops : min= 130, max= 256, avg=216.42, stdev=34.38, samples=19 00:55:08.402 lat (msec) : 4=0.40%, 10=1.74%, 20=0.71%, 50=18.32%, 100=67.43% 00:55:08.402 lat (msec) : 250=11.39% 00:55:08.402 cpu : usr=31.64%, sys=1.69%, ctx=902, majf=0, minf=9 00:55:08.402 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:55:08.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.402 filename0: (groupid=0, jobs=1): err= 0: pid=83425: Mon Dec 9 10:07:14 2024 00:55:08.402 read: IOPS=218, BW=874KiB/s (895kB/s)(8768KiB/10032msec) 00:55:08.402 slat (usec): min=5, max=10023, avg=28.35, stdev=333.84 00:55:08.402 clat (msec): min=23, max=131, avg=73.07, stdev=19.69 00:55:08.402 lat (msec): min=23, max=131, avg=73.10, stdev=19.69 00:55:08.402 clat percentiles (msec): 00:55:08.402 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:55:08.402 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 78], 00:55:08.402 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 102], 95.00th=[ 110], 00:55:08.402 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 124], 99.95th=[ 132], 00:55:08.402 | 99.99th=[ 132] 00:55:08.402 bw ( KiB/s): min= 640, max= 1134, per=4.21%, avg=871.90, stdev=116.16, samples=20 00:55:08.402 iops : min= 160, max= 283, avg=217.95, stdev=28.98, samples=20 00:55:08.402 lat (msec) : 50=13.59%, 100=75.59%, 250=10.81% 00:55:08.402 cpu : usr=38.26%, sys=2.25%, ctx=1201, majf=0, minf=9 00:55:08.402 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.8%, 
16=16.1%, 32=0.0%, >=64=0.0% 00:55:08.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.402 filename0: (groupid=0, jobs=1): err= 0: pid=83426: Mon Dec 9 10:07:14 2024 00:55:08.402 read: IOPS=205, BW=824KiB/s (843kB/s)(8252KiB/10020msec) 00:55:08.402 slat (usec): min=3, max=4027, avg=16.62, stdev=88.49 00:55:08.402 clat (msec): min=26, max=167, avg=77.58, stdev=22.26 00:55:08.402 lat (msec): min=26, max=167, avg=77.59, stdev=22.26 00:55:08.402 clat percentiles (msec): 00:55:08.402 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:55:08.402 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:55:08.402 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 115], 00:55:08.402 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 167], 00:55:08.402 | 99.99th=[ 167] 00:55:08.402 bw ( KiB/s): min= 512, max= 1010, per=3.97%, avg=821.30, stdev=150.28, samples=20 00:55:08.402 iops : min= 128, max= 252, avg=205.30, stdev=37.54, samples=20 00:55:08.402 lat (msec) : 50=13.91%, 100=69.95%, 250=16.14% 00:55:08.402 cpu : usr=35.72%, sys=1.91%, ctx=1047, majf=0, minf=9 00:55:08.402 IO depths : 1=0.1%, 2=1.9%, 4=7.7%, 8=75.1%, 16=15.2%, 32=0.0%, >=64=0.0% 00:55:08.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 complete : 0=0.0%, 4=89.4%, 8=9.0%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 issued rwts: total=2063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.402 filename1: (groupid=0, jobs=1): err= 0: pid=83427: Mon Dec 9 10:07:14 2024 00:55:08.402 read: IOPS=220, BW=881KiB/s (902kB/s)(8812KiB/10005msec) 00:55:08.402 slat (usec): min=4, max=4031, avg=16.98, stdev=85.70 00:55:08.402 clat (msec): min=6, max=169, avg=72.57, stdev=22.33 00:55:08.402 lat (msec): min=6, max=169, avg=72.59, stdev=22.33 00:55:08.402 clat percentiles (msec): 00:55:08.402 | 1.00th=[ 13], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 53], 00:55:08.402 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:55:08.402 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 112], 00:55:08.402 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 159], 99.95th=[ 169], 00:55:08.402 | 99.99th=[ 169] 00:55:08.402 bw ( KiB/s): min= 528, max= 1024, per=4.17%, avg=863.16, stdev=148.28, samples=19 00:55:08.402 iops : min= 132, max= 256, avg=215.79, stdev=37.07, samples=19 00:55:08.402 lat (msec) : 10=0.68%, 20=0.59%, 50=15.98%, 100=70.40%, 250=12.35% 00:55:08.402 cpu : usr=39.02%, sys=2.01%, ctx=1218, majf=0, minf=9 00:55:08.402 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:55:08.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.402 filename1: (groupid=0, jobs=1): err= 0: pid=83428: Mon Dec 9 10:07:14 2024 00:55:08.402 read: IOPS=216, BW=865KiB/s (886kB/s)(8692KiB/10045msec) 00:55:08.402 slat (usec): min=3, max=8026, avg=25.02, stdev=297.80 00:55:08.402 clat (msec): min=7, max=134, avg=73.83, stdev=21.42 00:55:08.402 lat (msec): min=7, max=134, 
avg=73.86, stdev=21.42 00:55:08.402 clat percentiles (msec): 00:55:08.402 | 1.00th=[ 11], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 58], 00:55:08.402 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 80], 00:55:08.402 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 111], 00:55:08.402 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 133], 00:55:08.402 | 99.99th=[ 136] 00:55:08.402 bw ( KiB/s): min= 656, max= 1421, per=4.17%, avg=862.65, stdev=156.27, samples=20 00:55:08.402 iops : min= 164, max= 355, avg=215.65, stdev=39.02, samples=20 00:55:08.402 lat (msec) : 10=0.74%, 20=2.12%, 50=11.32%, 100=75.20%, 250=10.63% 00:55:08.402 cpu : usr=35.29%, sys=1.85%, ctx=1064, majf=0, minf=9 00:55:08.402 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:55:08.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 issued rwts: total=2173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.402 filename1: (groupid=0, jobs=1): err= 0: pid=83429: Mon Dec 9 10:07:14 2024 00:55:08.402 read: IOPS=217, BW=869KiB/s (890kB/s)(8708KiB/10019msec) 00:55:08.402 slat (usec): min=8, max=8038, avg=22.54, stdev=242.93 00:55:08.402 clat (msec): min=24, max=143, avg=73.50, stdev=19.59 00:55:08.402 lat (msec): min=24, max=143, avg=73.52, stdev=19.59 00:55:08.402 clat percentiles (msec): 00:55:08.402 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:55:08.402 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:55:08.402 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 101], 95.00th=[ 109], 00:55:08.402 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 128], 00:55:08.402 | 99.99th=[ 144] 00:55:08.402 bw ( KiB/s): min= 664, max= 984, per=4.19%, avg=866.80, stdev=96.08, samples=20 00:55:08.402 iops : min= 166, max= 246, avg=216.70, stdev=24.02, samples=20 00:55:08.402 lat (msec) : 50=16.44%, 100=73.63%, 250=9.92% 00:55:08.402 cpu : usr=31.37%, sys=1.87%, ctx=909, majf=0, minf=9 00:55:08.402 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:55:08.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 issued rwts: total=2177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.402 filename1: (groupid=0, jobs=1): err= 0: pid=83430: Mon Dec 9 10:07:14 2024 00:55:08.402 read: IOPS=219, BW=879KiB/s (900kB/s)(8832KiB/10045msec) 00:55:08.402 slat (usec): min=5, max=4027, avg=22.32, stdev=170.66 00:55:08.402 clat (msec): min=7, max=144, avg=72.65, stdev=22.38 00:55:08.402 lat (msec): min=7, max=144, avg=72.67, stdev=22.38 00:55:08.402 clat percentiles (msec): 00:55:08.402 | 1.00th=[ 12], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:55:08.402 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:55:08.402 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 103], 95.00th=[ 113], 00:55:08.402 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 144], 00:55:08.402 | 99.99th=[ 144] 00:55:08.402 bw ( KiB/s): min= 560, max= 1389, per=4.24%, avg=876.65, stdev=161.45, samples=20 00:55:08.402 iops : min= 140, max= 347, avg=219.15, stdev=40.32, samples=20 00:55:08.402 lat (msec) : 10=0.82%, 20=1.99%, 50=11.10%, 100=75.36%, 250=10.73% 00:55:08.402 cpu : usr=45.70%, 
sys=2.12%, ctx=1066, majf=0, minf=9 00:55:08.402 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:55:08.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.402 issued rwts: total=2208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.402 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.402 filename1: (groupid=0, jobs=1): err= 0: pid=83431: Mon Dec 9 10:07:14 2024 00:55:08.402 read: IOPS=218, BW=875KiB/s (896kB/s)(8764KiB/10017msec) 00:55:08.402 slat (usec): min=8, max=8040, avg=18.30, stdev=171.54 00:55:08.402 clat (msec): min=21, max=121, avg=73.06, stdev=19.39 00:55:08.402 lat (msec): min=21, max=121, avg=73.07, stdev=19.39 00:55:08.402 clat percentiles (msec): 00:55:08.402 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 53], 00:55:08.403 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:55:08.403 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 108], 00:55:08.403 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:55:08.403 | 99.99th=[ 122] 00:55:08.403 bw ( KiB/s): min= 664, max= 968, per=4.22%, avg=872.00, stdev=88.36, samples=20 00:55:08.403 iops : min= 166, max= 242, avg=218.00, stdev=22.09, samples=20 00:55:08.403 lat (msec) : 50=15.43%, 100=75.17%, 250=9.40% 00:55:08.403 cpu : usr=31.23%, sys=1.97%, ctx=910, majf=0, minf=9 00:55:08.403 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:55:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.403 filename1: (groupid=0, jobs=1): err= 0: pid=83432: Mon Dec 9 10:07:14 2024 00:55:08.403 read: IOPS=215, BW=862KiB/s (883kB/s)(8652KiB/10034msec) 00:55:08.403 slat (usec): min=5, max=8032, avg=27.00, stdev=282.90 00:55:08.403 clat (msec): min=21, max=142, avg=74.06, stdev=20.03 00:55:08.403 lat (msec): min=21, max=142, avg=74.09, stdev=20.03 00:55:08.403 clat percentiles (msec): 00:55:08.403 | 1.00th=[ 26], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 56], 00:55:08.403 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:55:08.403 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 105], 95.00th=[ 111], 00:55:08.403 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:55:08.403 | 99.99th=[ 142] 00:55:08.403 bw ( KiB/s): min= 616, max= 1088, per=4.15%, avg=858.55, stdev=111.84, samples=20 00:55:08.403 iops : min= 154, max= 272, avg=214.60, stdev=28.02, samples=20 00:55:08.403 lat (msec) : 50=12.07%, 100=75.50%, 250=12.44% 00:55:08.403 cpu : usr=41.00%, sys=2.36%, ctx=1275, majf=0, minf=9 00:55:08.403 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=82.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:55:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.403 filename1: (groupid=0, jobs=1): err= 0: pid=83433: Mon Dec 9 10:07:14 2024 00:55:08.403 read: IOPS=221, BW=888KiB/s (909kB/s)(8892KiB/10016msec) 00:55:08.403 slat (usec): min=4, max=8026, avg=28.08, stdev=255.05 00:55:08.403 clat (msec): min=14, 
max=127, avg=71.94, stdev=19.44 00:55:08.403 lat (msec): min=18, max=127, avg=71.97, stdev=19.43 00:55:08.403 clat percentiles (msec): 00:55:08.403 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 53], 00:55:08.403 | 30.00th=[ 58], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 77], 00:55:08.403 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 102], 95.00th=[ 108], 00:55:08.403 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 125], 00:55:08.403 | 99.99th=[ 128] 00:55:08.403 bw ( KiB/s): min= 672, max= 976, per=4.27%, avg=883.65, stdev=101.68, samples=20 00:55:08.403 iops : min= 168, max= 244, avg=220.90, stdev=25.43, samples=20 00:55:08.403 lat (msec) : 20=0.22%, 50=15.97%, 100=73.46%, 250=10.35% 00:55:08.403 cpu : usr=42.27%, sys=2.30%, ctx=1162, majf=0, minf=9 00:55:08.403 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:55:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.403 filename1: (groupid=0, jobs=1): err= 0: pid=83434: Mon Dec 9 10:07:14 2024 00:55:08.403 read: IOPS=212, BW=851KiB/s (872kB/s)(8520KiB/10009msec) 00:55:08.403 slat (usec): min=6, max=11022, avg=30.86, stdev=327.36 00:55:08.403 clat (msec): min=11, max=161, avg=75.04, stdev=23.09 00:55:08.403 lat (msec): min=11, max=161, avg=75.07, stdev=23.11 00:55:08.403 clat percentiles (msec): 00:55:08.403 | 1.00th=[ 19], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:55:08.403 | 30.00th=[ 58], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 79], 00:55:08.403 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 116], 00:55:08.403 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 163], 00:55:08.403 | 99.99th=[ 163] 00:55:08.403 bw ( KiB/s): min= 512, max= 1128, per=4.10%, avg=848.40, stdev=173.95, samples=20 00:55:08.403 iops : min= 128, max= 282, avg=212.10, stdev=43.49, samples=20 00:55:08.403 lat (msec) : 20=1.03%, 50=12.39%, 100=70.14%, 250=16.43% 00:55:08.403 cpu : usr=42.89%, sys=2.38%, ctx=1249, majf=0, minf=9 00:55:08.403 IO depths : 1=0.1%, 2=1.7%, 4=6.7%, 8=76.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:55:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 complete : 0=0.0%, 4=88.6%, 8=9.9%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 issued rwts: total=2130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.403 filename2: (groupid=0, jobs=1): err= 0: pid=83435: Mon Dec 9 10:07:14 2024 00:55:08.403 read: IOPS=213, BW=853KiB/s (874kB/s)(8568KiB/10040msec) 00:55:08.403 slat (usec): min=7, max=8024, avg=21.65, stdev=244.75 00:55:08.403 clat (msec): min=10, max=144, avg=74.85, stdev=21.38 00:55:08.403 lat (msec): min=10, max=144, avg=74.88, stdev=21.38 00:55:08.403 clat percentiles (msec): 00:55:08.403 | 1.00th=[ 14], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:55:08.403 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:55:08.403 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 109], 00:55:08.403 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:55:08.403 | 99.99th=[ 144] 00:55:08.403 bw ( KiB/s): min= 640, max= 1152, per=4.11%, avg=850.40, stdev=118.92, samples=20 00:55:08.403 iops : min= 160, max= 288, avg=212.60, stdev=29.73, samples=20 00:55:08.403 lat (msec) : 20=1.40%, 
50=12.32%, 100=74.23%, 250=12.04% 00:55:08.403 cpu : usr=31.57%, sys=1.73%, ctx=926, majf=0, minf=9 00:55:08.403 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:55:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 issued rwts: total=2142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.403 filename2: (groupid=0, jobs=1): err= 0: pid=83436: Mon Dec 9 10:07:14 2024 00:55:08.403 read: IOPS=220, BW=882KiB/s (903kB/s)(8832KiB/10017msec) 00:55:08.403 slat (usec): min=4, max=8028, avg=22.19, stdev=208.90 00:55:08.403 clat (msec): min=16, max=144, avg=72.46, stdev=20.60 00:55:08.403 lat (msec): min=16, max=144, avg=72.48, stdev=20.60 00:55:08.403 clat percentiles (msec): 00:55:08.403 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 52], 00:55:08.403 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:55:08.403 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 106], 95.00th=[ 112], 00:55:08.403 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:55:08.403 | 99.99th=[ 144] 00:55:08.403 bw ( KiB/s): min= 600, max= 1000, per=4.24%, avg=876.80, stdev=112.97, samples=20 00:55:08.403 iops : min= 150, max= 250, avg=219.20, stdev=28.24, samples=20 00:55:08.403 lat (msec) : 20=0.45%, 50=15.85%, 100=73.14%, 250=10.55% 00:55:08.403 cpu : usr=35.78%, sys=2.05%, ctx=1002, majf=0, minf=9 00:55:08.403 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:55:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 issued rwts: total=2208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.403 filename2: (groupid=0, jobs=1): err= 0: pid=83437: Mon Dec 9 10:07:14 2024 00:55:08.403 read: IOPS=203, BW=813KiB/s (833kB/s)(8148KiB/10018msec) 00:55:08.403 slat (usec): min=5, max=8026, avg=18.94, stdev=177.60 00:55:08.403 clat (msec): min=20, max=144, avg=78.53, stdev=22.46 00:55:08.403 lat (msec): min=20, max=144, avg=78.55, stdev=22.46 00:55:08.403 clat percentiles (msec): 00:55:08.403 | 1.00th=[ 24], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 56], 00:55:08.403 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 78], 60.00th=[ 82], 00:55:08.403 | 70.00th=[ 90], 80.00th=[ 100], 90.00th=[ 111], 95.00th=[ 117], 00:55:08.403 | 99.00th=[ 127], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:55:08.403 | 99.99th=[ 144] 00:55:08.403 bw ( KiB/s): min= 512, max= 1024, per=3.92%, avg=811.20, stdev=149.83, samples=20 00:55:08.403 iops : min= 128, max= 256, avg=202.80, stdev=37.46, samples=20 00:55:08.403 lat (msec) : 50=10.90%, 100=69.42%, 250=19.69% 00:55:08.403 cpu : usr=41.95%, sys=2.29%, ctx=1481, majf=0, minf=9 00:55:08.403 IO depths : 1=0.1%, 2=2.4%, 4=9.6%, 8=73.1%, 16=14.8%, 32=0.0%, >=64=0.0% 00:55:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 complete : 0=0.0%, 4=89.8%, 8=8.1%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 issued rwts: total=2037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.403 filename2: (groupid=0, jobs=1): err= 0: pid=83438: Mon Dec 9 10:07:14 2024 00:55:08.403 read: IOPS=212, BW=850KiB/s (870kB/s)(8508KiB/10009msec) 00:55:08.403 slat 
(usec): min=4, max=6035, avg=20.85, stdev=165.27 00:55:08.403 clat (msec): min=11, max=160, avg=75.17, stdev=23.00 00:55:08.403 lat (msec): min=11, max=160, avg=75.19, stdev=23.00 00:55:08.403 clat percentiles (msec): 00:55:08.403 | 1.00th=[ 25], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 54], 00:55:08.403 | 30.00th=[ 59], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 79], 00:55:08.403 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 110], 95.00th=[ 120], 00:55:08.403 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 153], 99.95th=[ 161], 00:55:08.403 | 99.99th=[ 161] 00:55:08.403 bw ( KiB/s): min= 512, max= 1104, per=4.10%, avg=847.20, stdev=163.19, samples=20 00:55:08.403 iops : min= 128, max= 276, avg=211.80, stdev=40.80, samples=20 00:55:08.403 lat (msec) : 20=0.71%, 50=12.32%, 100=72.26%, 250=14.72% 00:55:08.403 cpu : usr=37.26%, sys=2.36%, ctx=1233, majf=0, minf=9 00:55:08.403 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:55:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 complete : 0=0.0%, 4=88.1%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.403 issued rwts: total=2127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.403 filename2: (groupid=0, jobs=1): err= 0: pid=83439: Mon Dec 9 10:07:14 2024 00:55:08.403 read: IOPS=220, BW=880KiB/s (901kB/s)(8808KiB/10009msec) 00:55:08.403 slat (nsec): min=4697, max=39815, avg=14837.60, stdev=4576.37 00:55:08.404 clat (msec): min=8, max=155, avg=72.63, stdev=20.32 00:55:08.404 lat (msec): min=8, max=155, avg=72.65, stdev=20.32 00:55:08.404 clat percentiles (msec): 00:55:08.404 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 52], 00:55:08.404 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:55:08.404 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 106], 95.00th=[ 109], 00:55:08.404 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 155], 00:55:08.404 | 99.99th=[ 155] 00:55:08.404 bw ( KiB/s): min= 568, max= 1080, per=4.24%, avg=877.25, stdev=130.59, samples=20 00:55:08.404 iops : min= 142, max= 270, avg=219.30, stdev=32.65, samples=20 00:55:08.404 lat (msec) : 10=0.32%, 20=0.68%, 50=17.03%, 100=70.80%, 250=11.17% 00:55:08.404 cpu : usr=31.40%, sys=1.76%, ctx=931, majf=0, minf=9 00:55:08.404 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:55:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.404 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.404 issued rwts: total=2202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.404 filename2: (groupid=0, jobs=1): err= 0: pid=83440: Mon Dec 9 10:07:14 2024 00:55:08.404 read: IOPS=213, BW=854KiB/s (875kB/s)(8544KiB/10001msec) 00:55:08.404 slat (usec): min=3, max=8067, avg=21.19, stdev=211.13 00:55:08.404 clat (usec): min=956, max=169116, avg=74821.51, stdev=26932.73 00:55:08.404 lat (usec): min=964, max=169130, avg=74842.70, stdev=26934.08 00:55:08.404 clat percentiles (usec): 00:55:08.404 | 1.00th=[ 1500], 5.00th=[ 35390], 10.00th=[ 47449], 20.00th=[ 52691], 00:55:08.404 | 30.00th=[ 63701], 40.00th=[ 71828], 50.00th=[ 76022], 60.00th=[ 79168], 00:55:08.404 | 70.00th=[ 84411], 80.00th=[ 96994], 90.00th=[109577], 95.00th=[117965], 00:55:08.404 | 99.00th=[147850], 99.50th=[149947], 99.90th=[158335], 99.95th=[168821], 00:55:08.404 | 99.99th=[168821] 00:55:08.404 bw ( KiB/s): min= 512, max= 
1024, per=3.89%, avg=805.05, stdev=163.33, samples=19 00:55:08.404 iops : min= 128, max= 256, avg=201.26, stdev=40.83, samples=19 00:55:08.404 lat (usec) : 1000=0.28% 00:55:08.404 lat (msec) : 2=1.31%, 4=0.94%, 10=1.64%, 20=0.75%, 50=9.93% 00:55:08.404 lat (msec) : 100=66.76%, 250=18.40% 00:55:08.404 cpu : usr=38.74%, sys=2.25%, ctx=1374, majf=0, minf=9 00:55:08.404 IO depths : 1=0.1%, 2=2.5%, 4=10.1%, 8=73.0%, 16=14.4%, 32=0.0%, >=64=0.0% 00:55:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.404 complete : 0=0.0%, 4=89.8%, 8=8.0%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.404 issued rwts: total=2136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.404 filename2: (groupid=0, jobs=1): err= 0: pid=83441: Mon Dec 9 10:07:14 2024 00:55:08.404 read: IOPS=220, BW=883KiB/s (904kB/s)(8836KiB/10005msec) 00:55:08.404 slat (usec): min=4, max=8029, avg=22.05, stdev=241.09 00:55:08.404 clat (msec): min=5, max=168, avg=72.36, stdev=24.60 00:55:08.404 lat (msec): min=5, max=168, avg=72.38, stdev=24.59 00:55:08.404 clat percentiles (msec): 00:55:08.404 | 1.00th=[ 8], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:55:08.404 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 74], 00:55:08.404 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 115], 00:55:08.404 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 169], 00:55:08.404 | 99.99th=[ 169] 00:55:08.404 bw ( KiB/s): min= 472, max= 976, per=4.15%, avg=857.68, stdev=165.20, samples=19 00:55:08.404 iops : min= 118, max= 244, avg=214.42, stdev=41.30, samples=19 00:55:08.404 lat (msec) : 10=1.45%, 20=0.86%, 50=18.56%, 100=66.18%, 250=12.95% 00:55:08.404 cpu : usr=31.54%, sys=1.72%, ctx=955, majf=0, minf=10 00:55:08.404 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:55:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.404 complete : 0=0.0%, 4=87.9%, 8=11.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.404 issued rwts: total=2209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.404 filename2: (groupid=0, jobs=1): err= 0: pid=83442: Mon Dec 9 10:07:14 2024 00:55:08.404 read: IOPS=222, BW=888KiB/s (910kB/s)(8896KiB/10014msec) 00:55:08.404 slat (usec): min=4, max=8039, avg=23.98, stdev=255.15 00:55:08.404 clat (msec): min=22, max=122, avg=71.91, stdev=19.29 00:55:08.404 lat (msec): min=22, max=123, avg=71.93, stdev=19.29 00:55:08.404 clat percentiles (msec): 00:55:08.404 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 53], 00:55:08.404 | 30.00th=[ 60], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:55:08.404 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 109], 00:55:08.404 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:55:08.404 | 99.99th=[ 124] 00:55:08.404 bw ( KiB/s): min= 664, max= 1000, per=4.29%, avg=886.00, stdev=103.13, samples=20 00:55:08.404 iops : min= 166, max= 250, avg=221.50, stdev=25.78, samples=20 00:55:08.404 lat (msec) : 50=17.94%, 100=72.48%, 250=9.58% 00:55:08.404 cpu : usr=33.70%, sys=1.77%, ctx=1139, majf=0, minf=9 00:55:08.404 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:55:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.404 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:08.404 issued rwts: total=2224,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:55:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:55:08.404 00:55:08.404 Run status group 0 (all jobs): 00:55:08.404 READ: bw=20.2MiB/s (21.2MB/s), 813KiB/s-895KiB/s (833kB/s-916kB/s), io=203MiB (213MB), run=10001-10056msec 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:55:08.404 10:07:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.404 bdev_null0 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.404 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.405 [2024-12-09 10:07:14.458716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:55:08.405 
10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.405 bdev_null1 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:08.405 { 00:55:08.405 "params": { 00:55:08.405 "name": "Nvme$subsystem", 00:55:08.405 "trtype": "$TEST_TRANSPORT", 00:55:08.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:08.405 "adrfam": "ipv4", 00:55:08.405 "trsvcid": "$NVMF_PORT", 00:55:08.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:08.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:08.405 "hdgst": ${hdgst:-false}, 00:55:08.405 "ddgst": ${ddgst:-false} 00:55:08.405 }, 00:55:08.405 "method": "bdev_nvme_attach_controller" 00:55:08.405 } 00:55:08.405 EOF 00:55:08.405 )") 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:55:08.405 
10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:08.405 { 00:55:08.405 "params": { 00:55:08.405 "name": "Nvme$subsystem", 00:55:08.405 "trtype": "$TEST_TRANSPORT", 00:55:08.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:08.405 "adrfam": "ipv4", 00:55:08.405 "trsvcid": "$NVMF_PORT", 00:55:08.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:08.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:08.405 "hdgst": ${hdgst:-false}, 00:55:08.405 "ddgst": ${ddgst:-false} 00:55:08.405 }, 00:55:08.405 "method": "bdev_nvme_attach_controller" 00:55:08.405 } 00:55:08.405 EOF 00:55:08.405 )") 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:55:08.405 "params": { 00:55:08.405 "name": "Nvme0", 00:55:08.405 "trtype": "tcp", 00:55:08.405 "traddr": "10.0.0.3", 00:55:08.405 "adrfam": "ipv4", 00:55:08.405 "trsvcid": "4420", 00:55:08.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:08.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:55:08.405 "hdgst": false, 00:55:08.405 "ddgst": false 00:55:08.405 }, 00:55:08.405 "method": "bdev_nvme_attach_controller" 00:55:08.405 },{ 00:55:08.405 "params": { 00:55:08.405 "name": "Nvme1", 00:55:08.405 "trtype": "tcp", 00:55:08.405 "traddr": "10.0.0.3", 00:55:08.405 "adrfam": "ipv4", 00:55:08.405 "trsvcid": "4420", 00:55:08.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:08.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:08.405 "hdgst": false, 00:55:08.405 "ddgst": false 00:55:08.405 }, 00:55:08.405 "method": "bdev_nvme_attach_controller" 00:55:08.405 }' 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:55:08.405 10:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:55:08.405 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:55:08.405 ... 00:55:08.405 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:55:08.405 ... 
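For reference, the fio_dif_rand_params case above exports two 64 MiB null bdevs (512-byte blocks, 16 bytes of metadata, DIF type 1) as NVMe/TCP subsystems on 10.0.0.3:4420, then drives fio through SPDK's bdev fio plugin: the JSON emitted by gen_nvmf_target_json arrives on /dev/fd/62 via --spdk_json_conf while the generated job file arrives on /dev/fd/61. A minimal stand-alone sketch of the same invocation, assuming a saved copy of that JSON in /tmp/nvmf_target.json and the usual NvmeXn1 bdev naming (both are assumptions, not taken verbatim from this log):

  cat > /tmp/rand_params.fio <<'EOF'
  [global]
  ioengine=spdk_bdev    ; served by the LD_PRELOADed SPDK fio plugin
  thread=1
  time_based=1
  runtime=5
  rw=randread
  bs=8k,16k,128k        ; read,write,trim sizes, matching the job lines above
  iodepth=8
  numjobs=2
  [filename0]
  filename=Nvme0n1      ; bdev created by the "Nvme0" attach entry in the JSON
  [filename1]
  filename=Nvme1n1
  EOF
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf=/tmp/nvmf_target.json /tmp/rand_params.fio

With two filename sections and numjobs=2, fio spawns the four workers reported in the "Starting 4 threads" line that follows.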
00:55:08.405 fio-3.35 00:55:08.405 Starting 4 threads 00:55:13.675 00:55:13.675 filename0: (groupid=0, jobs=1): err= 0: pid=83578: Mon Dec 9 10:07:20 2024 00:55:13.675 read: IOPS=1877, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5001msec) 00:55:13.675 slat (nsec): min=3906, max=53479, avg=15049.27, stdev=3555.99 00:55:13.675 clat (usec): min=1239, max=10775, avg=4205.84, stdev=680.17 00:55:13.675 lat (usec): min=1253, max=10785, avg=4220.89, stdev=680.01 00:55:13.675 clat percentiles (usec): 00:55:13.675 | 1.00th=[ 2278], 5.00th=[ 3097], 10.00th=[ 3916], 20.00th=[ 4015], 00:55:13.675 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:55:13.675 | 70.00th=[ 4293], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 5145], 00:55:13.675 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 7963], 99.95th=[ 9634], 00:55:13.675 | 99.99th=[10814] 00:55:13.675 bw ( KiB/s): min=14108, max=17056, per=24.64%, avg=15181.89, stdev=877.64, samples=9 00:55:13.675 iops : min= 1763, max= 2132, avg=1897.67, stdev=109.78, samples=9 00:55:13.675 lat (msec) : 2=0.27%, 4=20.08%, 10=79.63%, 20=0.03% 00:55:13.675 cpu : usr=91.84%, sys=7.34%, ctx=11, majf=0, minf=0 00:55:13.675 IO depths : 1=0.1%, 2=20.2%, 4=53.4%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:55:13.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:13.675 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:13.675 issued rwts: total=9389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:13.675 latency : target=0, window=0, percentile=100.00%, depth=8 00:55:13.675 filename0: (groupid=0, jobs=1): err= 0: pid=83579: Mon Dec 9 10:07:20 2024 00:55:13.675 read: IOPS=1950, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5001msec) 00:55:13.675 slat (nsec): min=4766, max=46433, avg=13629.52, stdev=3501.73 00:55:13.675 clat (usec): min=1014, max=10829, avg=4054.43, stdev=725.91 00:55:13.675 lat (usec): min=1023, max=10836, avg=4068.06, stdev=725.91 00:55:13.675 clat percentiles (usec): 00:55:13.675 | 1.00th=[ 2008], 5.00th=[ 2376], 10.00th=[ 3130], 20.00th=[ 3982], 00:55:13.675 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:55:13.675 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 4883], 00:55:13.675 | 99.00th=[ 6521], 99.50th=[ 7046], 99.90th=[ 7504], 99.95th=[ 9241], 00:55:13.675 | 99.99th=[10814] 00:55:13.675 bw ( KiB/s): min=13920, max=16689, per=25.29%, avg=15582.33, stdev=948.56, samples=9 00:55:13.675 iops : min= 1740, max= 2086, avg=1947.78, stdev=118.55, samples=9 00:55:13.675 lat (msec) : 2=0.99%, 4=24.61%, 10=74.37%, 20=0.03% 00:55:13.675 cpu : usr=91.94%, sys=7.20%, ctx=41, majf=0, minf=0 00:55:13.675 IO depths : 1=0.1%, 2=17.5%, 4=55.1%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:55:13.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:13.675 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:13.675 issued rwts: total=9754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:13.675 latency : target=0, window=0, percentile=100.00%, depth=8 00:55:13.676 filename1: (groupid=0, jobs=1): err= 0: pid=83580: Mon Dec 9 10:07:20 2024 00:55:13.676 read: IOPS=1996, BW=15.6MiB/s (16.4MB/s)(78.0MiB/5001msec) 00:55:13.676 slat (usec): min=3, max=212, avg=14.11, stdev= 6.29 00:55:13.676 clat (usec): min=899, max=10783, avg=3957.33, stdev=814.42 00:55:13.676 lat (usec): min=908, max=10800, avg=3971.44, stdev=814.53 00:55:13.676 clat percentiles (usec): 00:55:13.676 | 1.00th=[ 1450], 5.00th=[ 2311], 10.00th=[ 2638], 20.00th=[ 3818], 00:55:13.676 | 
30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:55:13.676 | 70.00th=[ 4146], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 4817], 00:55:13.676 | 99.00th=[ 6325], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 9241], 00:55:13.676 | 99.99th=[10814] 00:55:13.676 bw ( KiB/s): min=13920, max=17744, per=25.98%, avg=16005.33, stdev=1042.64, samples=9 00:55:13.676 iops : min= 1740, max= 2218, avg=2000.67, stdev=130.33, samples=9 00:55:13.676 lat (usec) : 1000=0.03% 00:55:13.676 lat (msec) : 2=3.13%, 4=27.54%, 10=69.27%, 20=0.03% 00:55:13.676 cpu : usr=91.40%, sys=7.36%, ctx=72, majf=0, minf=0 00:55:13.676 IO depths : 1=0.1%, 2=15.5%, 4=56.2%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:55:13.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:13.676 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:13.676 issued rwts: total=9986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:13.676 latency : target=0, window=0, percentile=100.00%, depth=8 00:55:13.676 filename1: (groupid=0, jobs=1): err= 0: pid=83581: Mon Dec 9 10:07:20 2024 00:55:13.676 read: IOPS=1877, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5002msec) 00:55:13.676 slat (nsec): min=3857, max=48007, avg=14464.45, stdev=3296.01 00:55:13.676 clat (usec): min=1235, max=10771, avg=4209.03, stdev=680.59 00:55:13.676 lat (usec): min=1249, max=10780, avg=4223.50, stdev=680.73 00:55:13.676 clat percentiles (usec): 00:55:13.676 | 1.00th=[ 2278], 5.00th=[ 3130], 10.00th=[ 3916], 20.00th=[ 4015], 00:55:13.676 | 30.00th=[ 4047], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:55:13.676 | 70.00th=[ 4293], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 5145], 00:55:13.676 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 7963], 99.95th=[ 9503], 00:55:13.676 | 99.99th=[10814] 00:55:13.676 bw ( KiB/s): min=14080, max=17056, per=24.64%, avg=15178.78, stdev=881.96, samples=9 00:55:13.676 iops : min= 1760, max= 2132, avg=1897.33, stdev=110.24, samples=9 00:55:13.676 lat (msec) : 2=0.27%, 4=19.36%, 10=80.34%, 20=0.03% 00:55:13.676 cpu : usr=92.34%, sys=6.80%, ctx=67, majf=0, minf=0 00:55:13.676 IO depths : 1=0.1%, 2=20.2%, 4=53.4%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:55:13.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:13.676 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:13.676 issued rwts: total=9389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:13.676 latency : target=0, window=0, percentile=100.00%, depth=8 00:55:13.676 00:55:13.676 Run status group 0 (all jobs): 00:55:13.676 READ: bw=60.2MiB/s (63.1MB/s), 14.7MiB/s-15.6MiB/s (15.4MB/s-16.4MB/s), io=301MiB (316MB), run=5001-5002msec 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:13.676 ************************************ 00:55:13.676 END TEST fio_dif_rand_params 00:55:13.676 ************************************ 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:13.676 00:55:13.676 real 0m23.442s 00:55:13.676 user 2m3.345s 00:55:13.676 sys 0m8.332s 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:13.676 10:07:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:55:13.676 10:07:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:55:13.676 10:07:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:55:13.676 10:07:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:13.676 10:07:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:55:13.676 ************************************ 00:55:13.676 START TEST fio_dif_digest 00:55:13.676 ************************************ 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:55:13.676 bdev_null0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:55:13.676 [2024-12-09 10:07:20.640566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:13.676 { 00:55:13.676 "params": { 00:55:13.676 "name": "Nvme$subsystem", 00:55:13.676 "trtype": "$TEST_TRANSPORT", 00:55:13.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:13.676 "adrfam": 
"ipv4", 00:55:13.676 "trsvcid": "$NVMF_PORT", 00:55:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:13.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:13.676 "hdgst": ${hdgst:-false}, 00:55:13.676 "ddgst": ${ddgst:-false} 00:55:13.676 }, 00:55:13.676 "method": "bdev_nvme_attach_controller" 00:55:13.676 } 00:55:13.676 EOF 00:55:13.676 )") 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:55:13.676 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:55:13.677 "params": { 00:55:13.677 "name": "Nvme0", 00:55:13.677 "trtype": "tcp", 00:55:13.677 "traddr": "10.0.0.3", 00:55:13.677 "adrfam": "ipv4", 00:55:13.677 "trsvcid": "4420", 00:55:13.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:13.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:55:13.677 "hdgst": true, 00:55:13.677 "ddgst": true 00:55:13.677 }, 00:55:13.677 "method": "bdev_nvme_attach_controller" 00:55:13.677 }' 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:55:13.677 10:07:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:55:13.677 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:55:13.677 ... 
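The fio_dif_digest case differs mainly in the attach parameters: gen_nvmf_target_json sets "hdgst" and "ddgst" to true, so the initiator negotiates NVMe/TCP header and data digests (CRC32C) with the listener on 10.0.0.3:4420, while the backing null bdev is created with --dif-type 3. A sketch of a complete config file carrying the entry printed above; the outer "subsystems" wrapper is the usual SPDK JSON-config layout and is assumed here, since the log only shows the inner entry:

  cat > /tmp/digest_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true,
              "ddgst": true
            }
          }
        ]
      }
    ]
  }
  EOF

A digest mismatch on the wire would show up as I/O errors in the three jobs below; the clean err=0 results indicate both ends agreed on the CRCs.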
00:55:13.677 fio-3.35 00:55:13.677 Starting 3 threads 00:55:25.880 00:55:25.880 filename0: (groupid=0, jobs=1): err= 0: pid=83686: Mon Dec 9 10:07:31 2024 00:55:25.880 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(266MiB/10010msec) 00:55:25.880 slat (nsec): min=7674, max=86419, avg=11346.17, stdev=4886.59 00:55:25.880 clat (usec): min=9508, max=16665, avg=14109.14, stdev=295.82 00:55:25.880 lat (usec): min=9517, max=16691, avg=14120.49, stdev=296.05 00:55:25.880 clat percentiles (usec): 00:55:25.880 | 1.00th=[13566], 5.00th=[13829], 10.00th=[13829], 20.00th=[13960], 00:55:25.880 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14091], 60.00th=[14091], 00:55:25.880 | 70.00th=[14222], 80.00th=[14222], 90.00th=[14353], 95.00th=[14484], 00:55:25.880 | 99.00th=[14746], 99.50th=[14877], 99.90th=[16581], 99.95th=[16712], 00:55:25.880 | 99.99th=[16712] 00:55:25.880 bw ( KiB/s): min=26880, max=27648, per=33.34%, avg=27156.75, stdev=370.30, samples=20 00:55:25.880 iops : min= 210, max= 216, avg=212.10, stdev= 2.94, samples=20 00:55:25.880 lat (msec) : 10=0.14%, 20=99.86% 00:55:25.880 cpu : usr=91.53%, sys=7.87%, ctx=97, majf=0, minf=0 00:55:25.880 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:55:25.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:25.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:25.880 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:25.880 latency : target=0, window=0, percentile=100.00%, depth=3 00:55:25.880 filename0: (groupid=0, jobs=1): err= 0: pid=83687: Mon Dec 9 10:07:31 2024 00:55:25.880 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(266MiB/10013msec) 00:55:25.880 slat (nsec): min=6660, max=50272, avg=11777.93, stdev=5348.62 00:55:25.880 clat (usec): min=13389, max=15319, avg=14111.82, stdev=223.90 00:55:25.880 lat (usec): min=13397, max=15347, avg=14123.60, stdev=224.47 00:55:25.880 clat percentiles (usec): 00:55:25.880 | 1.00th=[13698], 5.00th=[13829], 10.00th=[13829], 20.00th=[13960], 00:55:25.880 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14091], 60.00th=[14091], 00:55:25.880 | 70.00th=[14222], 80.00th=[14222], 90.00th=[14353], 95.00th=[14484], 00:55:25.880 | 99.00th=[14877], 99.50th=[14877], 99.90th=[15270], 99.95th=[15270], 00:55:25.880 | 99.99th=[15270] 00:55:25.880 bw ( KiB/s): min=26880, max=27648, per=33.33%, avg=27148.80, stdev=375.83, samples=20 00:55:25.880 iops : min= 210, max= 216, avg=212.10, stdev= 2.94, samples=20 00:55:25.880 lat (msec) : 20=100.00% 00:55:25.880 cpu : usr=91.17%, sys=8.23%, ctx=20, majf=0, minf=0 00:55:25.880 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:55:25.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:25.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:25.880 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:25.880 latency : target=0, window=0, percentile=100.00%, depth=3 00:55:25.880 filename0: (groupid=0, jobs=1): err= 0: pid=83688: Mon Dec 9 10:07:31 2024 00:55:25.880 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(266MiB/10010msec) 00:55:25.880 slat (nsec): min=7360, max=56242, avg=11058.73, stdev=4332.95 00:55:25.880 clat (usec): min=11276, max=15130, avg=14110.01, stdev=246.27 00:55:25.880 lat (usec): min=11285, max=15158, avg=14121.07, stdev=246.56 00:55:25.880 clat percentiles (usec): 00:55:25.880 | 1.00th=[13566], 5.00th=[13829], 10.00th=[13829], 20.00th=[13960], 00:55:25.880 | 30.00th=[13960], 40.00th=[14091], 
50.00th=[14091], 60.00th=[14091], 00:55:25.880 | 70.00th=[14222], 80.00th=[14222], 90.00th=[14353], 95.00th=[14615], 00:55:25.880 | 99.00th=[14877], 99.50th=[14877], 99.90th=[15139], 99.95th=[15139], 00:55:25.880 | 99.99th=[15139] 00:55:25.880 bw ( KiB/s): min=26880, max=27703, per=33.34%, avg=27156.95, stdev=382.00, samples=20 00:55:25.880 iops : min= 210, max= 216, avg=212.10, stdev= 2.94, samples=20 00:55:25.880 lat (msec) : 20=100.00% 00:55:25.880 cpu : usr=91.02%, sys=8.34%, ctx=29, majf=0, minf=0 00:55:25.880 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:55:25.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:25.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:25.880 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:25.880 latency : target=0, window=0, percentile=100.00%, depth=3 00:55:25.880 00:55:25.880 Run status group 0 (all jobs): 00:55:25.880 READ: bw=79.5MiB/s (83.4MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=797MiB (835MB), run=10010-10013msec 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:55:25.880 ************************************ 00:55:25.880 END TEST fio_dif_digest 00:55:25.880 ************************************ 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:25.880 00:55:25.880 real 0m11.044s 00:55:25.880 user 0m28.093s 00:55:25.880 sys 0m2.683s 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:25.880 10:07:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:55:25.880 10:07:31 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:55:25.880 10:07:31 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:25.880 rmmod nvme_tcp 00:55:25.880 rmmod nvme_fabrics 00:55:25.880 rmmod nvme_keyring 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:25.880 10:07:31 nvmf_dif 
-- nvmf/common.sh@128 -- # set -e 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82946 ']' 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82946 00:55:25.880 10:07:31 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82946 ']' 00:55:25.880 10:07:31 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82946 00:55:25.880 10:07:31 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:55:25.880 10:07:31 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:25.880 10:07:31 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82946 00:55:25.880 killing process with pid 82946 00:55:25.880 10:07:31 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:25.880 10:07:31 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:25.880 10:07:31 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82946' 00:55:25.880 10:07:31 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82946 00:55:25.880 10:07:31 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82946 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:55:25.880 10:07:31 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:55:25.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:25.880 Waiting for block devices as requested 00:55:25.880 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:55:25.880 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:55:25.880 10:07:32 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:55:25.881 10:07:32 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:55:25.881 10:07:32 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:55:25.881 10:07:32 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:55:25.881 10:07:32 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:55:25.881 10:07:32 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:55:25.881 10:07:32 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:25.881 10:07:32 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:25.881 10:07:32 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:55:25.881 10:07:32 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:25.881 10:07:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:55:25.881 10:07:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:25.881 10:07:32 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:55:25.881 00:55:25.881 real 0m59.358s 00:55:25.881 user 3m47.136s 00:55:25.881 sys 0m19.412s 00:55:25.881 10:07:32 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:25.881 10:07:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:55:25.881 ************************************ 00:55:25.881 END TEST nvmf_dif 00:55:25.881 ************************************ 00:55:25.881 10:07:32 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:55:25.881 10:07:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:55:25.881 10:07:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:25.881 10:07:32 -- common/autotest_common.sh@10 -- # set +x 00:55:25.881 ************************************ 00:55:25.881 START TEST nvmf_abort_qd_sizes 00:55:25.881 ************************************ 00:55:25.881 10:07:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:55:25.881 * Looking for test storage... 00:55:25.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:55:25.881 10:07:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:25.881 10:07:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:25.881 10:07:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:25.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:25.881 --rc genhtml_branch_coverage=1 00:55:25.881 --rc genhtml_function_coverage=1 00:55:25.881 --rc genhtml_legend=1 00:55:25.881 --rc geninfo_all_blocks=1 00:55:25.881 --rc geninfo_unexecuted_blocks=1 00:55:25.881 00:55:25.881 ' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:25.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:25.881 --rc genhtml_branch_coverage=1 00:55:25.881 --rc genhtml_function_coverage=1 00:55:25.881 --rc genhtml_legend=1 00:55:25.881 --rc geninfo_all_blocks=1 00:55:25.881 --rc geninfo_unexecuted_blocks=1 00:55:25.881 00:55:25.881 ' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:25.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:25.881 --rc genhtml_branch_coverage=1 00:55:25.881 --rc genhtml_function_coverage=1 00:55:25.881 --rc genhtml_legend=1 00:55:25.881 --rc geninfo_all_blocks=1 00:55:25.881 --rc geninfo_unexecuted_blocks=1 00:55:25.881 00:55:25.881 ' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:25.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:25.881 --rc genhtml_branch_coverage=1 00:55:25.881 --rc genhtml_function_coverage=1 00:55:25.881 --rc genhtml_legend=1 00:55:25.881 --rc geninfo_all_blocks=1 00:55:25.881 --rc geninfo_unexecuted_blocks=1 00:55:25.881 00:55:25.881 ' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:25.881 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:25.881 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:55:25.882 Cannot find device "nvmf_init_br" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:55:25.882 Cannot find device "nvmf_init_br2" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:55:25.882 Cannot find device "nvmf_tgt_br" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:55:25.882 Cannot find device "nvmf_tgt_br2" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:55:25.882 Cannot find device "nvmf_init_br" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:55:25.882 Cannot find device "nvmf_init_br2" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:55:25.882 Cannot find device "nvmf_tgt_br" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:55:25.882 Cannot find device "nvmf_tgt_br2" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:55:25.882 Cannot find device "nvmf_br" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:55:25.882 Cannot find device "nvmf_init_if" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:55:25.882 Cannot find device "nvmf_init_if2" 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:25.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:25.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:55:25.882 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:55:26.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:26.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:55:26.141 00:55:26.141 --- 10.0.0.3 ping statistics --- 00:55:26.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:26.141 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:55:26.141 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:55:26.141 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:55:26.141 00:55:26.141 --- 10.0.0.4 ping statistics --- 00:55:26.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:26.141 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:26.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:26.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:55:26.141 00:55:26.141 --- 10.0.0.1 ping statistics --- 00:55:26.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:26.141 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:55:26.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:55:26.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:55:26.141 00:55:26.141 --- 10.0.0.2 ping statistics --- 00:55:26.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:26.141 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:55:26.141 10:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:55:26.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:26.708 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:55:26.967 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:26.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84330 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84330 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84330 ']' 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:26.967 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:26.967 [2024-12-09 10:07:34.421629] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:55:26.967 [2024-12-09 10:07:34.421718] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:27.225 [2024-12-09 10:07:34.577228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:55:27.225 [2024-12-09 10:07:34.619149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:27.225 [2024-12-09 10:07:34.619473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:27.225 [2024-12-09 10:07:34.619713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:27.225 [2024-12-09 10:07:34.619856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:27.225 [2024-12-09 10:07:34.619992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:27.225 [2024-12-09 10:07:34.621030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:27.225 [2024-12-09 10:07:34.621102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:55:27.225 [2024-12-09 10:07:34.621176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:55:27.225 [2024-12-09 10:07:34.621178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:27.225 [2024-12-09 10:07:34.657040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:55:27.225 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:27.225 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:55:27.225 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:27.226 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:27.226 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:55:27.485 10:07:34 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
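The class/subclass/prog-if arithmetic traced above boils down to one filter: keep PCI functions with class 01 (mass storage), subclass 08 (non-volatile memory) and programming interface 02 (NVM Express), and print their addresses. A condensed, illustrative restatement of that pipeline (same lspci flags as the trace, not SPDK's helper verbatim):

  # list NVMe controller PCI addresses, e.g. the 0000:00:10.0 and 0000:00:11.0 found above
  lspci -mm -n -D |                  # machine-readable, numeric IDs, include the PCI domain
    grep -i -- -p02 |                # prog-if 02 = NVM Express interface
    awk '$2 ~ /0108/ {print $1}' |   # class 01 / subclass 08 in the quoted second field
    tr -d '"'                        # strip the quoting that lspci -mm adds

Both controllers it finds here are the QEMU-emulated 1b36:0010 devices that setup.sh bound to uio_pci_generic earlier in the log.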
00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:27.485 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:27.485 ************************************ 00:55:27.485 START TEST spdk_target_abort 00:55:27.485 ************************************ 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:27.485 spdk_targetn1 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:27.485 [2024-12-09 10:07:34.880571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:27.485 [2024-12-09 10:07:34.921494] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:55:27.485 10:07:34 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:55:27.485 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:27.486 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:55:27.486 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:27.486 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:55:27.486 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:27.486 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:27.486 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:27.486 10:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:31.675 Initializing NVMe Controllers 00:55:31.675 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:55:31.675 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:31.675 Initialization complete. Launching workers. 
00:55:31.675 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10178, failed: 0 00:55:31.675 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1056, failed to submit 9122 00:55:31.675 success 827, unsuccessful 229, failed 0 00:55:31.675 10:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:31.675 10:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:35.015 Initializing NVMe Controllers 00:55:35.015 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:55:35.015 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:35.015 Initialization complete. Launching workers. 00:55:35.015 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8976, failed: 0 00:55:35.015 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1168, failed to submit 7808 00:55:35.015 success 422, unsuccessful 746, failed 0 00:55:35.015 10:07:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:35.015 10:07:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:38.300 Initializing NVMe Controllers 00:55:38.300 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:55:38.300 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:38.300 Initialization complete. Launching workers. 
00:55:38.300 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31490, failed: 0 00:55:38.300 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2242, failed to submit 29248 00:55:38.300 success 445, unsuccessful 1797, failed 0 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84330 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84330 ']' 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84330 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84330 00:55:38.300 killing process with pid 84330 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84330' 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84330 00:55:38.300 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84330 00:55:38.558 00:55:38.558 real 0m11.111s 00:55:38.558 user 0m42.108s 00:55:38.558 sys 0m2.141s 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:38.558 ************************************ 00:55:38.558 END TEST spdk_target_abort 00:55:38.558 ************************************ 00:55:38.558 10:07:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:55:38.558 10:07:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:55:38.558 10:07:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:38.558 10:07:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:38.558 ************************************ 00:55:38.558 START TEST kernel_target_abort 00:55:38.558 
************************************ 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:38.558 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:55:38.559 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:55:38.559 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:55:38.559 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:55:38.559 10:07:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:55:38.559 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:55:38.559 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:55:38.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:39.074 Waiting for block devices as requested 00:55:39.074 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:55:39.074 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:55:39.074 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:55:39.074 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:55:39.074 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:55:39.074 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:55:39.074 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:55:39.074 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:55:39.074 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:55:39.074 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:55:39.074 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:55:39.332 No valid GPT data, bailing 00:55:39.332 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:55:39.332 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:55:39.332 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:55:39.332 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:55:39.332 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:55:39.332 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:55:39.332 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:55:39.332 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:55:39.332 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:55:39.333 No valid GPT data, bailing 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:55:39.333 No valid GPT data, bailing 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:55:39.333 No valid GPT data, bailing 00:55:39.333 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 --hostid=eacce601-ab17-4447-87df-663b0f4f5455 -a 10.0.0.1 -t tcp -s 4420 00:55:39.592 00:55:39.592 Discovery Log Number of Records 2, Generation counter 2 00:55:39.592 =====Discovery Log Entry 0====== 00:55:39.592 trtype: tcp 00:55:39.592 adrfam: ipv4 00:55:39.592 subtype: current discovery subsystem 00:55:39.592 treq: not specified, sq flow control disable supported 00:55:39.592 portid: 1 00:55:39.592 trsvcid: 4420 00:55:39.592 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:55:39.592 traddr: 10.0.0.1 00:55:39.592 eflags: none 00:55:39.592 sectype: none 00:55:39.592 =====Discovery Log Entry 1====== 00:55:39.592 trtype: tcp 00:55:39.592 adrfam: ipv4 00:55:39.592 subtype: nvme subsystem 00:55:39.592 treq: not specified, sq flow control disable supported 00:55:39.592 portid: 1 00:55:39.592 trsvcid: 4420 00:55:39.592 subnqn: nqn.2016-06.io.spdk:testnqn 00:55:39.592 traddr: 10.0.0.1 00:55:39.592 eflags: none 00:55:39.592 sectype: none 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:55:39.592 10:07:46 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:39.592 10:07:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:42.879 Initializing NVMe Controllers 00:55:42.879 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:55:42.879 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:42.879 Initialization complete. Launching workers. 00:55:42.879 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33078, failed: 0 00:55:42.879 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33078, failed to submit 0 00:55:42.879 success 0, unsuccessful 33078, failed 0 00:55:42.879 10:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:42.879 10:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:46.165 Initializing NVMe Controllers 00:55:46.165 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:55:46.165 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:46.165 Initialization complete. Launching workers. 
00:55:46.165 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65067, failed: 0 00:55:46.165 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28144, failed to submit 36923 00:55:46.165 success 0, unsuccessful 28144, failed 0 00:55:46.165 10:07:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:46.165 10:07:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:49.451 Initializing NVMe Controllers 00:55:49.451 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:55:49.451 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:49.451 Initialization complete. Launching workers. 00:55:49.451 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75840, failed: 0 00:55:49.451 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18964, failed to submit 56876 00:55:49.451 success 0, unsuccessful 18964, failed 0 00:55:49.451 10:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:55:49.451 10:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:55:49.451 10:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:55:49.451 10:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:49.451 10:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:55:49.451 10:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:55:49.451 10:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:49.451 10:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:55:49.451 10:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:55:49.451 10:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:55:50.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:51.392 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:55:51.651 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:55:51.651 00:55:51.651 real 0m12.989s 00:55:51.651 user 0m6.650s 00:55:51.651 sys 0m3.851s 00:55:51.651 10:07:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:51.651 10:07:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:51.651 ************************************ 00:55:51.651 END TEST kernel_target_abort 00:55:51.651 ************************************ 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:55:51.651 
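The kernel_target_abort run above drives the in-kernel nvmet target entirely through configfs and undoes it the same way (unlink the port's subsystem symlink, rmdir in reverse order, modprobe -r nvmet_tcp nvmet). A minimal sketch of that flow, assuming the standard nvmet attribute names (device_path, enable, addr_*, attr_allow_any_host), which the bare echo commands in the trace do not show:

  modprobe nvmet                      # nvmet_tcp is also loaded for the tcp transport
  cd /sys/kernel/config/nvmet
  # subsystem backed by the /dev/nvme1n1 selected by the block-device scan above
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme1n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  # TCP listener on 10.0.0.1:4420, then expose the subsystem through the port
  mkdir ports/1
  echo tcp      > ports/1/addr_trtype
  echo ipv4     > ports/1/addr_adrfam
  echo 10.0.0.1 > ports/1/addr_traddr
  echo 4420     > ports/1/addr_trsvcid
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The nvme discover output earlier in the log (one discovery entry plus one entry for nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420) is the check that this wiring took effect before the abort workloads start.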
10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:51.651 rmmod nvme_tcp 00:55:51.651 rmmod nvme_fabrics 00:55:51.651 rmmod nvme_keyring 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84330 ']' 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84330 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84330 ']' 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84330 00:55:51.651 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84330) - No such process 00:55:51.651 Process with pid 84330 is not found 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84330 is not found' 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:55:51.651 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:55:52.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:52.218 Waiting for block devices as requested 00:55:52.218 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:55:52.218 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:55:52.218 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:52.218 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:52.218 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:55:52.218 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:55:52.218 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:55:52.218 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:52.218 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:52.218 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:55:52.218 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:55:52.218 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:55:52.477 10:07:59 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:55:52.477 00:55:52.477 real 0m27.058s 00:55:52.477 user 0m49.904s 00:55:52.477 sys 0m7.384s 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:52.477 10:07:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:52.477 ************************************ 00:55:52.477 END TEST nvmf_abort_qd_sizes 00:55:52.477 ************************************ 00:55:52.477 10:07:59 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:55:52.477 10:07:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:55:52.477 10:07:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:52.477 10:07:59 -- common/autotest_common.sh@10 -- # set +x 00:55:52.477 ************************************ 00:55:52.477 START TEST keyring_file 00:55:52.477 ************************************ 00:55:52.477 10:07:59 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:55:52.736 * Looking for test storage... 
00:55:52.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:55:52.736 10:08:00 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:52.736 10:08:00 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:55:52.736 10:08:00 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:52.736 10:08:00 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@345 -- # : 1 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@353 -- # local d=1 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@355 -- # echo 1 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@353 -- # local d=2 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@355 -- # echo 2 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@368 -- # return 0 00:55:52.736 10:08:00 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:52.736 10:08:00 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:52.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:52.736 --rc genhtml_branch_coverage=1 00:55:52.736 --rc genhtml_function_coverage=1 00:55:52.736 --rc genhtml_legend=1 00:55:52.736 --rc geninfo_all_blocks=1 00:55:52.736 --rc geninfo_unexecuted_blocks=1 00:55:52.736 00:55:52.736 ' 00:55:52.736 10:08:00 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:52.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:52.736 --rc genhtml_branch_coverage=1 00:55:52.736 --rc genhtml_function_coverage=1 00:55:52.736 --rc genhtml_legend=1 00:55:52.736 --rc geninfo_all_blocks=1 00:55:52.736 --rc 
geninfo_unexecuted_blocks=1 00:55:52.736 00:55:52.736 ' 00:55:52.736 10:08:00 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:52.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:52.736 --rc genhtml_branch_coverage=1 00:55:52.736 --rc genhtml_function_coverage=1 00:55:52.736 --rc genhtml_legend=1 00:55:52.736 --rc geninfo_all_blocks=1 00:55:52.736 --rc geninfo_unexecuted_blocks=1 00:55:52.736 00:55:52.736 ' 00:55:52.736 10:08:00 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:52.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:52.736 --rc genhtml_branch_coverage=1 00:55:52.736 --rc genhtml_function_coverage=1 00:55:52.736 --rc genhtml_legend=1 00:55:52.736 --rc geninfo_all_blocks=1 00:55:52.736 --rc geninfo_unexecuted_blocks=1 00:55:52.736 00:55:52.736 ' 00:55:52.736 10:08:00 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:55:52.736 10:08:00 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:52.736 10:08:00 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:52.736 10:08:00 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:52.736 10:08:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:52.736 10:08:00 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:52.736 10:08:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:55:52.736 10:08:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@51 -- # : 0 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:52.736 10:08:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:52.737 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:52.737 10:08:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:55:52.737 10:08:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:55:52.737 10:08:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:55:52.737 10:08:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:55:52.737 10:08:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:55:52.737 10:08:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:55:52.737 10:08:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:55:52.737 10:08:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:55:52.737 10:08:00 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:55:52.737 10:08:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:55:52.737 10:08:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:55:52.737 10:08:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:55:52.737 10:08:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CQI4GUenJh 00:55:52.737 10:08:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:55:52.737 10:08:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CQI4GUenJh 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CQI4GUenJh 00:55:52.995 10:08:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.CQI4GUenJh 00:55:52.995 10:08:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ccg9LYMxXn 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:55:52.995 10:08:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:55:52.995 10:08:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:55:52.995 10:08:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:55:52.995 10:08:00 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:55:52.995 10:08:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:55:52.995 10:08:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ccg9LYMxXn 00:55:52.995 10:08:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ccg9LYMxXn 00:55:52.995 10:08:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Ccg9LYMxXn 00:55:52.995 10:08:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=85243 00:55:52.995 10:08:00 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:52.995 10:08:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85243 00:55:52.995 10:08:00 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85243 ']' 00:55:52.995 10:08:00 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:52.995 10:08:00 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:52.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
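Note: the prep_key calls traced above (keyring/common.sh) turn each raw hex key into an NVMe TLS PSK interchange string and stash it in a mode-0600 temp file; the exact encoding is produced by a small Python helper behind format_interchange_psk and is not reproduced here. A minimal bash sketch of that flow, using the same names that appear in the trace:

# sketch only -- mirrors the prep_key steps visible in the trace above
key=00112233445566778899aabbccddeeff   # raw hex PSK from keyring/file.sh
digest=0                               # 0 = no PSK digest
path=$(mktemp)                         # e.g. /tmp/tmp.CQI4GUenJh
format_interchange_psk "$key" "$digest" > "$path"   # emits the NVMeTLSkey-1 interchange string
chmod 0600 "$path"                     # group/other access is rejected, as exercised later in this test
echo "$path"                           # recorded by file.sh as key0path / key1path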
00:55:52.995 10:08:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:52.995 10:08:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:52.995 10:08:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:55:52.995 [2024-12-09 10:08:00.367872] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:55:52.995 [2024-12-09 10:08:00.368033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85243 ] 00:55:53.254 [2024-12-09 10:08:00.524935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:53.254 [2024-12-09 10:08:00.565033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:53.254 [2024-12-09 10:08:00.611468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:55:53.254 10:08:00 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:53.254 10:08:00 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:55:53.254 10:08:00 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:55:53.254 10:08:00 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:53.254 10:08:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:55:53.254 [2024-12-09 10:08:00.751542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:53.513 null0 00:55:53.513 [2024-12-09 10:08:00.783503] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:55:53.513 [2024-12-09 10:08:00.783704] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:53.513 10:08:00 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:55:53.513 [2024-12-09 10:08:00.811485] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:55:53.513 request: 00:55:53.513 { 00:55:53.513 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:55:53.513 "secure_channel": false, 00:55:53.513 "listen_address": { 00:55:53.513 "trtype": "tcp", 00:55:53.513 "traddr": "127.0.0.1", 00:55:53.513 "trsvcid": "4420" 00:55:53.513 }, 00:55:53.513 "method": "nvmf_subsystem_add_listener", 
00:55:53.513 "req_id": 1 00:55:53.513 } 00:55:53.513 Got JSON-RPC error response 00:55:53.513 response: 00:55:53.513 { 00:55:53.513 "code": -32602, 00:55:53.513 "message": "Invalid parameters" 00:55:53.513 } 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:55:53.513 10:08:00 keyring_file -- keyring/file.sh@47 -- # bperfpid=85247 00:55:53.513 10:08:00 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:55:53.513 10:08:00 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85247 /var/tmp/bperf.sock 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85247 ']' 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:53.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:53.513 10:08:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:55:53.513 [2024-12-09 10:08:00.891929] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
00:55:53.513 [2024-12-09 10:08:00.892121] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85247 ] 00:55:53.771 [2024-12-09 10:08:01.053845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:53.771 [2024-12-09 10:08:01.094806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:53.771 [2024-12-09 10:08:01.128594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:55:53.771 10:08:01 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:53.771 10:08:01 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:55:53.771 10:08:01 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CQI4GUenJh 00:55:53.771 10:08:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CQI4GUenJh 00:55:54.029 10:08:01 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Ccg9LYMxXn 00:55:54.029 10:08:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Ccg9LYMxXn 00:55:54.288 10:08:01 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:55:54.288 10:08:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:55:54.288 10:08:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:54.288 10:08:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:54.288 10:08:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:54.546 10:08:02 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.CQI4GUenJh == \/\t\m\p\/\t\m\p\.\C\Q\I\4\G\U\e\n\J\h ]] 00:55:54.546 10:08:02 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:55:54.546 10:08:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:54.546 10:08:02 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:55:54.546 10:08:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:54.546 10:08:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:54.805 10:08:02 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Ccg9LYMxXn == \/\t\m\p\/\t\m\p\.\C\c\g\9\L\Y\M\x\X\n ]] 00:55:54.805 10:08:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:55:54.805 10:08:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:54.805 10:08:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:54.805 10:08:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:54.805 10:08:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:54.805 10:08:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:55.064 10:08:02 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:55:55.064 10:08:02 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:55:55.064 10:08:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:55:55.064 10:08:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:55.064 10:08:02 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:55.064 10:08:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:55.064 10:08:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:55.630 10:08:02 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:55:55.630 10:08:02 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:55.630 10:08:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:55.630 [2024-12-09 10:08:03.103666] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:55:55.888 nvme0n1 00:55:55.888 10:08:03 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:55:55.888 10:08:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:55.888 10:08:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:55.888 10:08:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:55.888 10:08:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:55.888 10:08:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:56.146 10:08:03 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:55:56.146 10:08:03 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:55:56.146 10:08:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:55:56.146 10:08:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:56.146 10:08:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:56.146 10:08:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:56.146 10:08:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:56.404 10:08:03 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:55:56.404 10:08:03 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:55:56.404 Running I/O for 1 seconds... 
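Note: the refcount assertions in this part of the run rely on keyring_get_keys plus jq. A condensed equivalent of get_key/get_refcnt from keyring/common.sh, together with the two checks made right after the controller attach with --psk key0 (the attach is expected to bump key0 from one reference, held by the keyring, to two):

# condensed sketch of keyring/common.sh get_refcnt
get_refcnt() {
    bperf_cmd keyring_get_keys | jq -r ".[] | select(.name == \"$1\") | .refcnt"
}
(( $(get_refcnt key0) == 2 ))   # keyring + attached nvme0 controller
(( $(get_refcnt key1) == 1 ))   # keyring only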
00:55:57.800 11505.00 IOPS, 44.94 MiB/s 00:55:57.800 Latency(us) 00:55:57.800 [2024-12-09T10:08:05.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:57.800 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:55:57.800 nvme0n1 : 1.01 11557.16 45.15 0.00 0.00 11045.58 3902.37 16920.20 00:55:57.800 [2024-12-09T10:08:05.302Z] =================================================================================================================== 00:55:57.800 [2024-12-09T10:08:05.302Z] Total : 11557.16 45.15 0.00 0.00 11045.58 3902.37 16920.20 00:55:57.800 { 00:55:57.800 "results": [ 00:55:57.800 { 00:55:57.800 "job": "nvme0n1", 00:55:57.800 "core_mask": "0x2", 00:55:57.800 "workload": "randrw", 00:55:57.800 "percentage": 50, 00:55:57.800 "status": "finished", 00:55:57.800 "queue_depth": 128, 00:55:57.800 "io_size": 4096, 00:55:57.800 "runtime": 1.006735, 00:55:57.800 "iops": 11557.162510491837, 00:55:57.800 "mibps": 45.14516605660874, 00:55:57.800 "io_failed": 0, 00:55:57.800 "io_timeout": 0, 00:55:57.800 "avg_latency_us": 11045.583684338008, 00:55:57.800 "min_latency_us": 3902.370909090909, 00:55:57.800 "max_latency_us": 16920.203636363636 00:55:57.800 } 00:55:57.800 ], 00:55:57.800 "core_count": 1 00:55:57.800 } 00:55:57.800 10:08:04 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:55:57.800 10:08:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:55:57.800 10:08:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:55:57.800 10:08:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:57.800 10:08:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:57.800 10:08:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:57.800 10:08:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:57.800 10:08:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:58.057 10:08:05 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:55:58.057 10:08:05 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:55:58.057 10:08:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:55:58.057 10:08:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:58.057 10:08:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:58.057 10:08:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:58.057 10:08:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:58.315 10:08:05 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:55:58.315 10:08:05 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:55:58.315 10:08:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:55:58.315 10:08:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:55:58.315 10:08:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:55:58.315 10:08:05 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:58.315 10:08:05 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:55:58.315 10:08:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:58.315 10:08:05 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:55:58.315 10:08:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:55:58.573 [2024-12-09 10:08:06.048561] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:55:58.573 [2024-12-09 10:08:06.049182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeea5d0 (107): Transport endpoint is not connected 00:55:58.573 [2024-12-09 10:08:06.050170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeea5d0 (9): Bad file descriptor 00:55:58.573 [2024-12-09 10:08:06.051167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:55:58.573 [2024-12-09 10:08:06.051191] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:55:58.573 [2024-12-09 10:08:06.051202] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:55:58.573 [2024-12-09 10:08:06.051214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
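Note: the failed attach above, together with the JSON-RPC error dumped below, is an expected-failure case: key1 is not the PSK the target side expects, so the TLS connect fails and bdev_nvme_attach_controller returns an I/O error (-5). The NOT wrapper from autotest_common.sh simply inverts the exit status; a condensed equivalent of the check:

# attach with a mismatched PSK must fail (sketch of the NOT ... --psk key1 assertion)
if bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "attach with mismatched PSK unexpectedly succeeded" >&2
    exit 1
fi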
00:55:58.573 request: 00:55:58.573 { 00:55:58.573 "name": "nvme0", 00:55:58.573 "trtype": "tcp", 00:55:58.573 "traddr": "127.0.0.1", 00:55:58.573 "adrfam": "ipv4", 00:55:58.573 "trsvcid": "4420", 00:55:58.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:58.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:55:58.573 "prchk_reftag": false, 00:55:58.573 "prchk_guard": false, 00:55:58.573 "hdgst": false, 00:55:58.573 "ddgst": false, 00:55:58.573 "psk": "key1", 00:55:58.573 "allow_unrecognized_csi": false, 00:55:58.573 "method": "bdev_nvme_attach_controller", 00:55:58.573 "req_id": 1 00:55:58.573 } 00:55:58.573 Got JSON-RPC error response 00:55:58.573 response: 00:55:58.573 { 00:55:58.573 "code": -5, 00:55:58.573 "message": "Input/output error" 00:55:58.573 } 00:55:58.573 10:08:06 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:55:58.573 10:08:06 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:55:58.573 10:08:06 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:55:58.573 10:08:06 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:55:58.831 10:08:06 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:55:58.831 10:08:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:58.831 10:08:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:58.831 10:08:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:58.831 10:08:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:58.831 10:08:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:59.089 10:08:06 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:55:59.089 10:08:06 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:55:59.089 10:08:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:55:59.089 10:08:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:59.089 10:08:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:59.089 10:08:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:59.089 10:08:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:59.347 10:08:06 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:55:59.347 10:08:06 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:55:59.347 10:08:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:55:59.605 10:08:06 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:55:59.605 10:08:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:55:59.863 10:08:07 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:55:59.863 10:08:07 keyring_file -- keyring/file.sh@78 -- # jq length 00:55:59.863 10:08:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:00.121 10:08:07 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:56:00.121 10:08:07 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.CQI4GUenJh 00:56:00.379 10:08:07 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.CQI4GUenJh 00:56:00.379 10:08:07 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:56:00.379 10:08:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.CQI4GUenJh 00:56:00.379 10:08:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:56:00.379 10:08:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:00.379 10:08:07 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:56:00.379 10:08:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:00.379 10:08:07 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CQI4GUenJh 00:56:00.379 10:08:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CQI4GUenJh 00:56:00.637 [2024-12-09 10:08:07.917622] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CQI4GUenJh': 0100660 00:56:00.637 [2024-12-09 10:08:07.917689] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:56:00.637 request: 00:56:00.637 { 00:56:00.637 "name": "key0", 00:56:00.637 "path": "/tmp/tmp.CQI4GUenJh", 00:56:00.637 "method": "keyring_file_add_key", 00:56:00.637 "req_id": 1 00:56:00.637 } 00:56:00.637 Got JSON-RPC error response 00:56:00.637 response: 00:56:00.637 { 00:56:00.637 "code": -1, 00:56:00.637 "message": "Operation not permitted" 00:56:00.637 } 00:56:00.637 10:08:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:56:00.637 10:08:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:56:00.637 10:08:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:56:00.637 10:08:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:56:00.637 10:08:07 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.CQI4GUenJh 00:56:00.637 10:08:07 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CQI4GUenJh 00:56:00.637 10:08:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CQI4GUenJh 00:56:00.895 10:08:08 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.CQI4GUenJh 00:56:00.895 10:08:08 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:56:00.895 10:08:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:56:00.895 10:08:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:56:00.895 10:08:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:56:00.895 10:08:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:00.895 10:08:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:56:01.154 10:08:08 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:56:01.154 10:08:08 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:56:01.154 10:08:08 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:56:01.154 10:08:08 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:56:01.154 10:08:08 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:56:01.154 10:08:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:01.154 10:08:08 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:56:01.154 10:08:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:01.154 10:08:08 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:56:01.154 10:08:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:56:01.412 [2024-12-09 10:08:08.797810] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.CQI4GUenJh': No such file or directory 00:56:01.412 [2024-12-09 10:08:08.797857] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:56:01.412 [2024-12-09 10:08:08.797896] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:56:01.412 [2024-12-09 10:08:08.797905] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:56:01.412 [2024-12-09 10:08:08.797915] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:56:01.412 [2024-12-09 10:08:08.797924] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:56:01.412 request: 00:56:01.412 { 00:56:01.412 "name": "nvme0", 00:56:01.412 "trtype": "tcp", 00:56:01.412 "traddr": "127.0.0.1", 00:56:01.412 "adrfam": "ipv4", 00:56:01.412 "trsvcid": "4420", 00:56:01.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:56:01.412 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:56:01.412 "prchk_reftag": false, 00:56:01.412 "prchk_guard": false, 00:56:01.412 "hdgst": false, 00:56:01.412 "ddgst": false, 00:56:01.412 "psk": "key0", 00:56:01.412 "allow_unrecognized_csi": false, 00:56:01.412 "method": "bdev_nvme_attach_controller", 00:56:01.412 "req_id": 1 00:56:01.412 } 00:56:01.412 Got JSON-RPC error response 00:56:01.412 response: 00:56:01.412 { 00:56:01.412 "code": -19, 00:56:01.412 "message": "No such device" 00:56:01.412 } 00:56:01.412 10:08:08 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:56:01.413 10:08:08 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:56:01.413 10:08:08 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:56:01.413 10:08:08 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:56:01.413 10:08:08 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:56:01.413 10:08:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:56:01.671 10:08:09 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:56:01.671 10:08:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:56:01.671 10:08:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:56:01.671 10:08:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:56:01.671 
10:08:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:56:01.671 10:08:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:56:01.671 10:08:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4wqqliwo4d 00:56:01.671 10:08:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:56:01.671 10:08:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:56:01.671 10:08:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:56:01.671 10:08:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:56:01.671 10:08:09 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:56:01.671 10:08:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:56:01.671 10:08:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:56:01.671 10:08:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4wqqliwo4d 00:56:01.671 10:08:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4wqqliwo4d 00:56:01.671 10:08:09 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.4wqqliwo4d 00:56:01.671 10:08:09 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4wqqliwo4d 00:56:01.671 10:08:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4wqqliwo4d 00:56:01.929 10:08:09 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:56:01.929 10:08:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:56:02.496 nvme0n1 00:56:02.496 10:08:09 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:56:02.496 10:08:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:56:02.496 10:08:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:56:02.496 10:08:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:56:02.496 10:08:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:56:02.496 10:08:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:02.754 10:08:10 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:56:02.754 10:08:10 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:56:02.754 10:08:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:56:03.013 10:08:10 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:56:03.013 10:08:10 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:56:03.013 10:08:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:56:03.013 10:08:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:56:03.013 10:08:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:03.271 10:08:10 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:56:03.271 10:08:10 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:56:03.271 10:08:10 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:56:03.271 10:08:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:56:03.271 10:08:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:56:03.271 10:08:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:56:03.271 10:08:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:03.530 10:08:10 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:56:03.530 10:08:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:56:03.530 10:08:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:56:03.788 10:08:11 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:56:03.788 10:08:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:03.788 10:08:11 keyring_file -- keyring/file.sh@105 -- # jq length 00:56:04.047 10:08:11 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:56:04.047 10:08:11 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4wqqliwo4d 00:56:04.047 10:08:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4wqqliwo4d 00:56:04.305 10:08:11 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Ccg9LYMxXn 00:56:04.305 10:08:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Ccg9LYMxXn 00:56:04.564 10:08:12 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:56:04.564 10:08:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:56:05.130 nvme0n1 00:56:05.130 10:08:12 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:56:05.131 10:08:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:56:05.390 10:08:12 keyring_file -- keyring/file.sh@113 -- # config='{ 00:56:05.390 "subsystems": [ 00:56:05.390 { 00:56:05.390 "subsystem": "keyring", 00:56:05.390 "config": [ 00:56:05.390 { 00:56:05.390 "method": "keyring_file_add_key", 00:56:05.390 "params": { 00:56:05.390 "name": "key0", 00:56:05.390 "path": "/tmp/tmp.4wqqliwo4d" 00:56:05.390 } 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "method": "keyring_file_add_key", 00:56:05.390 "params": { 00:56:05.390 "name": "key1", 00:56:05.390 "path": "/tmp/tmp.Ccg9LYMxXn" 00:56:05.390 } 00:56:05.390 } 00:56:05.390 ] 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "subsystem": "iobuf", 00:56:05.390 "config": [ 00:56:05.390 { 00:56:05.390 "method": "iobuf_set_options", 00:56:05.390 "params": { 00:56:05.390 "small_pool_count": 8192, 00:56:05.390 "large_pool_count": 1024, 00:56:05.390 "small_bufsize": 8192, 00:56:05.390 "large_bufsize": 135168, 00:56:05.390 "enable_numa": false 00:56:05.390 } 00:56:05.390 } 00:56:05.390 ] 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "subsystem": 
"sock", 00:56:05.390 "config": [ 00:56:05.390 { 00:56:05.390 "method": "sock_set_default_impl", 00:56:05.390 "params": { 00:56:05.390 "impl_name": "uring" 00:56:05.390 } 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "method": "sock_impl_set_options", 00:56:05.390 "params": { 00:56:05.390 "impl_name": "ssl", 00:56:05.390 "recv_buf_size": 4096, 00:56:05.390 "send_buf_size": 4096, 00:56:05.390 "enable_recv_pipe": true, 00:56:05.390 "enable_quickack": false, 00:56:05.390 "enable_placement_id": 0, 00:56:05.390 "enable_zerocopy_send_server": true, 00:56:05.390 "enable_zerocopy_send_client": false, 00:56:05.390 "zerocopy_threshold": 0, 00:56:05.390 "tls_version": 0, 00:56:05.390 "enable_ktls": false 00:56:05.390 } 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "method": "sock_impl_set_options", 00:56:05.390 "params": { 00:56:05.390 "impl_name": "posix", 00:56:05.390 "recv_buf_size": 2097152, 00:56:05.390 "send_buf_size": 2097152, 00:56:05.390 "enable_recv_pipe": true, 00:56:05.390 "enable_quickack": false, 00:56:05.390 "enable_placement_id": 0, 00:56:05.390 "enable_zerocopy_send_server": true, 00:56:05.390 "enable_zerocopy_send_client": false, 00:56:05.390 "zerocopy_threshold": 0, 00:56:05.390 "tls_version": 0, 00:56:05.390 "enable_ktls": false 00:56:05.390 } 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "method": "sock_impl_set_options", 00:56:05.390 "params": { 00:56:05.390 "impl_name": "uring", 00:56:05.390 "recv_buf_size": 2097152, 00:56:05.390 "send_buf_size": 2097152, 00:56:05.390 "enable_recv_pipe": true, 00:56:05.390 "enable_quickack": false, 00:56:05.390 "enable_placement_id": 0, 00:56:05.390 "enable_zerocopy_send_server": false, 00:56:05.390 "enable_zerocopy_send_client": false, 00:56:05.390 "zerocopy_threshold": 0, 00:56:05.390 "tls_version": 0, 00:56:05.390 "enable_ktls": false 00:56:05.390 } 00:56:05.390 } 00:56:05.390 ] 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "subsystem": "vmd", 00:56:05.390 "config": [] 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "subsystem": "accel", 00:56:05.390 "config": [ 00:56:05.390 { 00:56:05.390 "method": "accel_set_options", 00:56:05.390 "params": { 00:56:05.390 "small_cache_size": 128, 00:56:05.390 "large_cache_size": 16, 00:56:05.390 "task_count": 2048, 00:56:05.390 "sequence_count": 2048, 00:56:05.390 "buf_count": 2048 00:56:05.390 } 00:56:05.390 } 00:56:05.390 ] 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "subsystem": "bdev", 00:56:05.390 "config": [ 00:56:05.390 { 00:56:05.390 "method": "bdev_set_options", 00:56:05.390 "params": { 00:56:05.390 "bdev_io_pool_size": 65535, 00:56:05.390 "bdev_io_cache_size": 256, 00:56:05.390 "bdev_auto_examine": true, 00:56:05.390 "iobuf_small_cache_size": 128, 00:56:05.390 "iobuf_large_cache_size": 16 00:56:05.390 } 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "method": "bdev_raid_set_options", 00:56:05.390 "params": { 00:56:05.390 "process_window_size_kb": 1024, 00:56:05.390 "process_max_bandwidth_mb_sec": 0 00:56:05.390 } 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "method": "bdev_iscsi_set_options", 00:56:05.390 "params": { 00:56:05.390 "timeout_sec": 30 00:56:05.390 } 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "method": "bdev_nvme_set_options", 00:56:05.390 "params": { 00:56:05.390 "action_on_timeout": "none", 00:56:05.390 "timeout_us": 0, 00:56:05.390 "timeout_admin_us": 0, 00:56:05.390 "keep_alive_timeout_ms": 10000, 00:56:05.390 "arbitration_burst": 0, 00:56:05.390 "low_priority_weight": 0, 00:56:05.390 "medium_priority_weight": 0, 00:56:05.390 "high_priority_weight": 0, 00:56:05.390 "nvme_adminq_poll_period_us": 
10000, 00:56:05.390 "nvme_ioq_poll_period_us": 0, 00:56:05.390 "io_queue_requests": 512, 00:56:05.390 "delay_cmd_submit": true, 00:56:05.390 "transport_retry_count": 4, 00:56:05.390 "bdev_retry_count": 3, 00:56:05.390 "transport_ack_timeout": 0, 00:56:05.390 "ctrlr_loss_timeout_sec": 0, 00:56:05.390 "reconnect_delay_sec": 0, 00:56:05.390 "fast_io_fail_timeout_sec": 0, 00:56:05.390 "disable_auto_failback": false, 00:56:05.390 "generate_uuids": false, 00:56:05.390 "transport_tos": 0, 00:56:05.390 "nvme_error_stat": false, 00:56:05.390 "rdma_srq_size": 0, 00:56:05.390 "io_path_stat": false, 00:56:05.390 "allow_accel_sequence": false, 00:56:05.390 "rdma_max_cq_size": 0, 00:56:05.390 "rdma_cm_event_timeout_ms": 0, 00:56:05.390 "dhchap_digests": [ 00:56:05.390 "sha256", 00:56:05.390 "sha384", 00:56:05.390 "sha512" 00:56:05.390 ], 00:56:05.390 "dhchap_dhgroups": [ 00:56:05.390 "null", 00:56:05.390 "ffdhe2048", 00:56:05.390 "ffdhe3072", 00:56:05.390 "ffdhe4096", 00:56:05.390 "ffdhe6144", 00:56:05.390 "ffdhe8192" 00:56:05.390 ] 00:56:05.390 } 00:56:05.390 }, 00:56:05.390 { 00:56:05.390 "method": "bdev_nvme_attach_controller", 00:56:05.390 "params": { 00:56:05.390 "name": "nvme0", 00:56:05.391 "trtype": "TCP", 00:56:05.391 "adrfam": "IPv4", 00:56:05.391 "traddr": "127.0.0.1", 00:56:05.391 "trsvcid": "4420", 00:56:05.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:56:05.391 "prchk_reftag": false, 00:56:05.391 "prchk_guard": false, 00:56:05.391 "ctrlr_loss_timeout_sec": 0, 00:56:05.391 "reconnect_delay_sec": 0, 00:56:05.391 "fast_io_fail_timeout_sec": 0, 00:56:05.391 "psk": "key0", 00:56:05.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:56:05.391 "hdgst": false, 00:56:05.391 "ddgst": false, 00:56:05.391 "multipath": "multipath" 00:56:05.391 } 00:56:05.391 }, 00:56:05.391 { 00:56:05.391 "method": "bdev_nvme_set_hotplug", 00:56:05.391 "params": { 00:56:05.391 "period_us": 100000, 00:56:05.391 "enable": false 00:56:05.391 } 00:56:05.391 }, 00:56:05.391 { 00:56:05.391 "method": "bdev_wait_for_examine" 00:56:05.391 } 00:56:05.391 ] 00:56:05.391 }, 00:56:05.391 { 00:56:05.391 "subsystem": "nbd", 00:56:05.391 "config": [] 00:56:05.391 } 00:56:05.391 ] 00:56:05.391 }' 00:56:05.391 10:08:12 keyring_file -- keyring/file.sh@115 -- # killprocess 85247 00:56:05.391 10:08:12 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85247 ']' 00:56:05.391 10:08:12 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85247 00:56:05.391 10:08:12 keyring_file -- common/autotest_common.sh@959 -- # uname 00:56:05.391 10:08:12 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:05.391 10:08:12 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85247 00:56:05.391 killing process with pid 85247 00:56:05.391 Received shutdown signal, test time was about 1.000000 seconds 00:56:05.391 00:56:05.391 Latency(us) 00:56:05.391 [2024-12-09T10:08:12.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:05.391 [2024-12-09T10:08:12.893Z] =================================================================================================================== 00:56:05.391 [2024-12-09T10:08:12.893Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:56:05.391 10:08:12 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:56:05.391 10:08:12 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:56:05.391 10:08:12 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85247' 00:56:05.391 
10:08:12 keyring_file -- common/autotest_common.sh@973 -- # kill 85247 00:56:05.391 10:08:12 keyring_file -- common/autotest_common.sh@978 -- # wait 85247 00:56:05.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:56:05.651 10:08:12 keyring_file -- keyring/file.sh@118 -- # bperfpid=85502 00:56:05.651 10:08:12 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85502 /var/tmp/bperf.sock 00:56:05.651 10:08:12 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85502 ']' 00:56:05.651 10:08:12 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:56:05.651 10:08:12 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:05.651 10:08:12 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:56:05.651 10:08:12 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:56:05.651 10:08:12 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:05.651 10:08:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:56:05.651 10:08:12 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:56:05.651 "subsystems": [ 00:56:05.651 { 00:56:05.651 "subsystem": "keyring", 00:56:05.651 "config": [ 00:56:05.651 { 00:56:05.651 "method": "keyring_file_add_key", 00:56:05.651 "params": { 00:56:05.651 "name": "key0", 00:56:05.651 "path": "/tmp/tmp.4wqqliwo4d" 00:56:05.651 } 00:56:05.651 }, 00:56:05.651 { 00:56:05.651 "method": "keyring_file_add_key", 00:56:05.651 "params": { 00:56:05.651 "name": "key1", 00:56:05.651 "path": "/tmp/tmp.Ccg9LYMxXn" 00:56:05.651 } 00:56:05.651 } 00:56:05.651 ] 00:56:05.651 }, 00:56:05.651 { 00:56:05.651 "subsystem": "iobuf", 00:56:05.651 "config": [ 00:56:05.651 { 00:56:05.651 "method": "iobuf_set_options", 00:56:05.651 "params": { 00:56:05.651 "small_pool_count": 8192, 00:56:05.651 "large_pool_count": 1024, 00:56:05.651 "small_bufsize": 8192, 00:56:05.651 "large_bufsize": 135168, 00:56:05.651 "enable_numa": false 00:56:05.651 } 00:56:05.651 } 00:56:05.651 ] 00:56:05.651 }, 00:56:05.651 { 00:56:05.651 "subsystem": "sock", 00:56:05.651 "config": [ 00:56:05.651 { 00:56:05.651 "method": "sock_set_default_impl", 00:56:05.651 "params": { 00:56:05.651 "impl_name": "uring" 00:56:05.651 } 00:56:05.651 }, 00:56:05.651 { 00:56:05.651 "method": "sock_impl_set_options", 00:56:05.651 "params": { 00:56:05.651 "impl_name": "ssl", 00:56:05.651 "recv_buf_size": 4096, 00:56:05.651 "send_buf_size": 4096, 00:56:05.651 "enable_recv_pipe": true, 00:56:05.651 "enable_quickack": false, 00:56:05.651 "enable_placement_id": 0, 00:56:05.651 "enable_zerocopy_send_server": true, 00:56:05.651 "enable_zerocopy_send_client": false, 00:56:05.651 "zerocopy_threshold": 0, 00:56:05.651 "tls_version": 0, 00:56:05.651 "enable_ktls": false 00:56:05.651 } 00:56:05.651 }, 00:56:05.651 { 00:56:05.651 "method": "sock_impl_set_options", 00:56:05.651 "params": { 00:56:05.651 "impl_name": "posix", 00:56:05.651 "recv_buf_size": 2097152, 00:56:05.651 "send_buf_size": 2097152, 00:56:05.651 "enable_recv_pipe": true, 00:56:05.651 "enable_quickack": false, 00:56:05.651 "enable_placement_id": 0, 00:56:05.651 "enable_zerocopy_send_server": true, 00:56:05.651 "enable_zerocopy_send_client": false, 00:56:05.651 "zerocopy_threshold": 0, 00:56:05.651 "tls_version": 0, 00:56:05.651 "enable_ktls": false 
00:56:05.651 } 00:56:05.651 }, 00:56:05.651 { 00:56:05.651 "method": "sock_impl_set_options", 00:56:05.651 "params": { 00:56:05.651 "impl_name": "uring", 00:56:05.651 "recv_buf_size": 2097152, 00:56:05.651 "send_buf_size": 2097152, 00:56:05.651 "enable_recv_pipe": true, 00:56:05.651 "enable_quickack": false, 00:56:05.651 "enable_placement_id": 0, 00:56:05.651 "enable_zerocopy_send_server": false, 00:56:05.651 "enable_zerocopy_send_client": false, 00:56:05.651 "zerocopy_threshold": 0, 00:56:05.651 "tls_version": 0, 00:56:05.651 "enable_ktls": false 00:56:05.651 } 00:56:05.651 } 00:56:05.651 ] 00:56:05.651 }, 00:56:05.652 { 00:56:05.652 "subsystem": "vmd", 00:56:05.652 "config": [] 00:56:05.652 }, 00:56:05.652 { 00:56:05.652 "subsystem": "accel", 00:56:05.652 "config": [ 00:56:05.652 { 00:56:05.652 "method": "accel_set_options", 00:56:05.652 "params": { 00:56:05.652 "small_cache_size": 128, 00:56:05.652 "large_cache_size": 16, 00:56:05.652 "task_count": 2048, 00:56:05.652 "sequence_count": 2048, 00:56:05.652 "buf_count": 2048 00:56:05.652 } 00:56:05.652 } 00:56:05.652 ] 00:56:05.652 }, 00:56:05.652 { 00:56:05.652 "subsystem": "bdev", 00:56:05.652 "config": [ 00:56:05.652 { 00:56:05.652 "method": "bdev_set_options", 00:56:05.652 "params": { 00:56:05.652 "bdev_io_pool_size": 65535, 00:56:05.652 "bdev_io_cache_size": 256, 00:56:05.652 "bdev_auto_examine": true, 00:56:05.652 "iobuf_small_cache_size": 128, 00:56:05.652 "iobuf_large_cache_size": 16 00:56:05.652 } 00:56:05.652 }, 00:56:05.652 { 00:56:05.652 "method": "bdev_raid_set_options", 00:56:05.652 "params": { 00:56:05.652 "process_window_size_kb": 1024, 00:56:05.652 "process_max_bandwidth_mb_sec": 0 00:56:05.652 } 00:56:05.652 }, 00:56:05.652 { 00:56:05.652 "method": "bdev_iscsi_set_options", 00:56:05.652 "params": { 00:56:05.652 "timeout_sec": 30 00:56:05.652 } 00:56:05.652 }, 00:56:05.652 { 00:56:05.652 "method": "bdev_nvme_set_options", 00:56:05.652 "params": { 00:56:05.652 "action_on_timeout": "none", 00:56:05.652 "timeout_us": 0, 00:56:05.652 "timeout_admin_us": 0, 00:56:05.652 "keep_alive_timeout_ms": 10000, 00:56:05.652 "arbitration_burst": 0, 00:56:05.652 "low_priority_weight": 0, 00:56:05.652 "medium_priority_weight": 0, 00:56:05.652 "high_priority_weight": 0, 00:56:05.652 "nvme_adminq_poll_period_us": 10000, 00:56:05.652 "nvme_ioq_poll_period_us": 0, 00:56:05.652 "io_queue_requests": 512, 00:56:05.652 "delay_cmd_submit": true, 00:56:05.652 "transport_retry_count": 4, 00:56:05.652 "bdev_retry_count": 3, 00:56:05.652 "transport_ack_timeout": 0, 00:56:05.652 "ctrlr_loss_timeout_sec": 0, 00:56:05.652 "reconnect_delay_sec": 0, 00:56:05.652 "fast_io_fail_timeout_sec": 0, 00:56:05.652 "disable_auto_failback": false, 00:56:05.652 "generate_uuids": false, 00:56:05.652 "transport_tos": 0, 00:56:05.652 "nvme_error_stat": false, 00:56:05.652 "rdma_srq_size": 0, 00:56:05.652 "io_path_stat": false, 00:56:05.652 "allow_accel_sequence": false, 00:56:05.652 "rdma_max_cq_size": 0, 00:56:05.652 "rdma_cm_event_timeout_ms": 0, 00:56:05.652 "dhchap_digests": [ 00:56:05.652 "sha256", 00:56:05.652 "sha384", 00:56:05.652 "sha512" 00:56:05.652 ], 00:56:05.652 "dhchap_dhgroups": [ 00:56:05.652 "null", 00:56:05.652 "ffdhe2048", 00:56:05.652 "ffdhe3072", 00:56:05.652 "ffdhe4096", 00:56:05.652 "ffdhe6144", 00:56:05.652 "ffdhe8192" 00:56:05.652 ] 00:56:05.652 } 00:56:05.652 }, 00:56:05.652 { 00:56:05.652 "method": "bdev_nvme_attach_controller", 00:56:05.652 "params": { 00:56:05.652 "name": "nvme0", 00:56:05.652 "trtype": "TCP", 00:56:05.652 "adrfam": "IPv4", 
00:56:05.652 "traddr": "127.0.0.1", 00:56:05.652 "trsvcid": "4420", 00:56:05.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:56:05.652 "prchk_reftag": false, 00:56:05.652 "prchk_guard": false, 00:56:05.652 "ctrlr_loss_timeout_sec": 0, 00:56:05.652 "reconnect_delay_sec": 0, 00:56:05.652 "fast_io_fail_timeout_sec": 0, 00:56:05.652 "psk": "key0", 00:56:05.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:56:05.652 "hdgst": false, 00:56:05.652 "ddgst": false, 00:56:05.652 "multipath": "multipath" 00:56:05.652 } 00:56:05.652 }, 00:56:05.652 { 00:56:05.652 "method": "bdev_nvme_set_hotplug", 00:56:05.652 "params": { 00:56:05.652 "period_us": 100000, 00:56:05.652 "enable": false 00:56:05.652 } 00:56:05.652 }, 00:56:05.652 { 00:56:05.652 "method": "bdev_wait_for_examine" 00:56:05.652 } 00:56:05.652 ] 00:56:05.652 }, 00:56:05.652 { 00:56:05.652 "subsystem": "nbd", 00:56:05.652 "config": [] 00:56:05.652 } 00:56:05.652 ] 00:56:05.652 }' 00:56:05.652 [2024-12-09 10:08:12.999575] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 00:56:05.652 [2024-12-09 10:08:13.000188] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85502 ] 00:56:05.652 [2024-12-09 10:08:13.151189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:05.910 [2024-12-09 10:08:13.183664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:05.910 [2024-12-09 10:08:13.296307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:56:05.910 [2024-12-09 10:08:13.337664] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:56:06.844 10:08:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:06.844 10:08:14 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:56:06.844 10:08:14 keyring_file -- keyring/file.sh@121 -- # jq length 00:56:06.844 10:08:14 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:56:06.844 10:08:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:06.844 10:08:14 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:56:06.844 10:08:14 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:56:06.844 10:08:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:56:06.844 10:08:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:56:06.844 10:08:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:56:06.844 10:08:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:06.844 10:08:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:56:07.410 10:08:14 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:56:07.410 10:08:14 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:56:07.410 10:08:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:56:07.410 10:08:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:56:07.410 10:08:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:56:07.410 10:08:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:56:07.410 10:08:14 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:07.410 10:08:14 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:56:07.410 10:08:14 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:56:07.410 10:08:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:56:07.410 10:08:14 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:56:07.976 10:08:15 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:56:07.976 10:08:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:56:07.976 10:08:15 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4wqqliwo4d /tmp/tmp.Ccg9LYMxXn 00:56:07.976 10:08:15 keyring_file -- keyring/file.sh@20 -- # killprocess 85502 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85502 ']' 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85502 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85502 00:56:07.976 killing process with pid 85502 00:56:07.976 Received shutdown signal, test time was about 1.000000 seconds 00:56:07.976 00:56:07.976 Latency(us) 00:56:07.976 [2024-12-09T10:08:15.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:07.976 [2024-12-09T10:08:15.478Z] =================================================================================================================== 00:56:07.976 [2024-12-09T10:08:15.478Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85502' 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@973 -- # kill 85502 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@978 -- # wait 85502 00:56:07.976 10:08:15 keyring_file -- keyring/file.sh@21 -- # killprocess 85243 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85243 ']' 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85243 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85243 00:56:07.976 killing process with pid 85243 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85243' 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@973 -- # kill 85243 00:56:07.976 10:08:15 keyring_file -- common/autotest_common.sh@978 -- # wait 85243 00:56:08.558 ************************************ 00:56:08.558 END TEST keyring_file 00:56:08.558 ************************************ 00:56:08.558 00:56:08.558 real 0m15.799s 00:56:08.558 user 0m41.081s 
00:56:08.558 sys 0m2.762s 00:56:08.558 10:08:15 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:08.558 10:08:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:56:08.558 10:08:15 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:56:08.558 10:08:15 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:56:08.558 10:08:15 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:08.558 10:08:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:08.558 10:08:15 -- common/autotest_common.sh@10 -- # set +x 00:56:08.558 ************************************ 00:56:08.558 START TEST keyring_linux 00:56:08.558 ************************************ 00:56:08.558 10:08:15 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:56:08.558 Joined session keyring: 390467506 00:56:08.558 * Looking for test storage... 00:56:08.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:56:08.558 10:08:15 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:56:08.558 10:08:15 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:56:08.558 10:08:15 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:56:08.558 10:08:16 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@345 -- # : 1 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:08.558 10:08:16 keyring_linux -- scripts/common.sh@368 -- # return 0 00:56:08.558 10:08:16 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:08.558 10:08:16 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:56:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:08.558 --rc genhtml_branch_coverage=1 00:56:08.558 --rc genhtml_function_coverage=1 00:56:08.558 --rc genhtml_legend=1 00:56:08.558 --rc geninfo_all_blocks=1 00:56:08.558 --rc geninfo_unexecuted_blocks=1 00:56:08.558 00:56:08.558 ' 00:56:08.558 10:08:16 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:56:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:08.558 --rc genhtml_branch_coverage=1 00:56:08.558 --rc genhtml_function_coverage=1 00:56:08.558 --rc genhtml_legend=1 00:56:08.558 --rc geninfo_all_blocks=1 00:56:08.558 --rc geninfo_unexecuted_blocks=1 00:56:08.558 00:56:08.558 ' 00:56:08.558 10:08:16 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:56:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:08.558 --rc genhtml_branch_coverage=1 00:56:08.558 --rc genhtml_function_coverage=1 00:56:08.558 --rc genhtml_legend=1 00:56:08.558 --rc geninfo_all_blocks=1 00:56:08.558 --rc geninfo_unexecuted_blocks=1 00:56:08.558 00:56:08.559 ' 00:56:08.559 10:08:16 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:56:08.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:08.559 --rc genhtml_branch_coverage=1 00:56:08.559 --rc genhtml_function_coverage=1 00:56:08.559 --rc genhtml_legend=1 00:56:08.559 --rc geninfo_all_blocks=1 00:56:08.559 --rc geninfo_unexecuted_blocks=1 00:56:08.559 00:56:08.559 ' 00:56:08.559 10:08:16 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:56:08.559 10:08:16 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:08.559 10:08:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:56:08.559 10:08:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:08.559 10:08:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:08.559 10:08:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:08.559 10:08:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:08.559 10:08:16 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:08.559 10:08:16 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:08.559 10:08:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:08.559 10:08:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:08.559 10:08:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:08.559 10:08:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:08.823 10:08:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eacce601-ab17-4447-87df-663b0f4f5455 00:56:08.823 10:08:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=eacce601-ab17-4447-87df-663b0f4f5455 00:56:08.823 10:08:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:08.823 10:08:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:08.823 10:08:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:08.823 10:08:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:08.823 10:08:16 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:08.823 10:08:16 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:56:08.823 10:08:16 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:08.823 10:08:16 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:08.823 10:08:16 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:08.823 10:08:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:08.824 10:08:16 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:08.824 10:08:16 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:08.824 10:08:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:56:08.824 10:08:16 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:56:08.824 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:56:08.824 10:08:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:56:08.824 10:08:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:56:08.824 10:08:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:56:08.824 10:08:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:56:08.824 10:08:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:56:08.824 10:08:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:56:08.824 /tmp/:spdk-test:key0 00:56:08.824 10:08:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:56:08.824 10:08:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:56:08.824 10:08:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:56:08.824 /tmp/:spdk-test:key1 00:56:08.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:08.824 10:08:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85624 00:56:08.824 10:08:16 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:56:08.824 10:08:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85624 00:56:08.824 10:08:16 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85624 ']' 00:56:08.824 10:08:16 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:08.824 10:08:16 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:08.824 10:08:16 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:08.824 10:08:16 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:08.824 10:08:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:56:08.824 [2024-12-09 10:08:16.237149] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
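The prep_key trace above reduces to a few shell steps: take the hex configured PSK, wrap it in the NVMe TLS PSK interchange form (NVMeTLSkey-1:00:<base64 of key material>:, where "00" matches the digest 0 / no-retained-hash case shown in the trace), write it to a file, and restrict that file to mode 0600. A minimal sketch of that flow follows; the CRC32 trailer inside the python helper is an assumption inferred from the trace, not a verbatim copy of format_interchange_psk in nvmf/common.sh, so treat it as illustrative only.

  # Sketch only -- mirrors "prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0" above.
  # Assumption: the interchange string is base64(PSK || CRC32(PSK)); the exact CRC handling is inferred,
  # not taken from nvmf/common.sh.
  key=00112233445566778899aabbccddeeff
  path=/tmp/:spdk-test:key0
  psk=$(python3 -c 'import base64,struct,sys,zlib; raw=sys.argv[1].encode(); print("NVMeTLSkey-1:00:"+base64.b64encode(raw+struct.pack("<I",zlib.crc32(raw))).decode()+":")' "$key")
  printf '%s\n' "$psk" > "$path"
  chmod 0600 "$path"    # same permission the test applies before handing the key to SPDK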
00:56:08.824 [2024-12-09 10:08:16.237408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85624 ] 00:56:09.083 [2024-12-09 10:08:16.393243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:09.083 [2024-12-09 10:08:16.432398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:09.083 [2024-12-09 10:08:16.476820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:56:10.018 10:08:17 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:10.019 10:08:17 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:56:10.019 10:08:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:56:10.019 10:08:17 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:10.019 10:08:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:56:10.019 [2024-12-09 10:08:17.198269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:10.019 null0 00:56:10.019 [2024-12-09 10:08:17.230225] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:56:10.019 [2024-12-09 10:08:17.230443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:56:10.019 10:08:17 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:10.019 10:08:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:56:10.019 8511646 00:56:10.019 10:08:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:56:10.019 597450223 00:56:10.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:56:10.019 10:08:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85648 00:56:10.019 10:08:17 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:56:10.019 10:08:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85648 /var/tmp/bperf.sock 00:56:10.019 10:08:17 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85648 ']' 00:56:10.019 10:08:17 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:56:10.019 10:08:17 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:10.019 10:08:17 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:56:10.019 10:08:17 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:10.019 10:08:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:56:10.019 [2024-12-09 10:08:17.314996] Starting SPDK v25.01-pre git sha1 070cd5283 / DPDK 24.03.0 initialization... 
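Once the interchange strings exist on disk, the test loads them into the kernel session keyring so that bdevperf can reference them by name instead of by file path. Condensed from the keyctl calls in the trace (the serials 8511646 and 597450223 printed above are simply whatever the kernel assigns at add time; in the trace the key material is passed literally rather than read back from the file, but the payload is the same):

  # Load both PSKs into the session keyring (@s); keyctl prints the new key serial numbers.
  sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
  sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)
  keyctl search @s user :spdk-test:key0    # resolves the name back to $sn0
  keyctl print "$sn0"                      # dumps the stored NVMeTLSkey-1 string for later comparison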
00:56:10.019 [2024-12-09 10:08:17.315305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85648 ] 00:56:10.019 [2024-12-09 10:08:17.464260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:10.019 [2024-12-09 10:08:17.497257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:10.277 10:08:17 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:10.277 10:08:17 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:56:10.277 10:08:17 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:56:10.277 10:08:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:56:10.537 10:08:17 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:56:10.537 10:08:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:56:10.818 [2024-12-09 10:08:18.187799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:56:10.819 10:08:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:56:10.819 10:08:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:56:11.077 [2024-12-09 10:08:18.472488] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:56:11.077 nvme0n1 00:56:11.077 10:08:18 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:56:11.077 10:08:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:56:11.077 10:08:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:56:11.077 10:08:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:56:11.077 10:08:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:11.077 10:08:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:56:11.643 10:08:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:56:11.643 10:08:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:56:11.643 10:08:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:56:11.643 10:08:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:56:11.643 10:08:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:56:11.643 10:08:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:56:11.643 10:08:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:11.643 10:08:19 keyring_linux -- keyring/linux.sh@25 -- # sn=8511646 00:56:11.643 10:08:19 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:56:11.643 10:08:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:56:11.643 
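With the keys resident in the session keyring, the bperf_cmd wrappers in the trace below boil down to plain rpc.py calls against the bdevperf control socket: enable the Linux keyring plugin while the framework is still paused (bdevperf was started with --wait-for-rpc), finish initialization, then attach the controller naming the PSK by its keyring description. Condensed as a sketch, with the socket path, flags, and NQNs taken verbatim from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  $rpc -s $sock keyring_linux_set_options --enable    # let SPDK resolve ":spdk-test:*" keyring names
  $rpc -s $sock framework_start_init                  # complete the init deferred by --wait-for-rpc
  $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  $rpc -s $sock keyring_get_keys                      # lists :spdk-test:key0 with the serial the kernel assigned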
10:08:19 keyring_linux -- keyring/linux.sh@26 -- # [[ 8511646 == \8\5\1\1\6\4\6 ]] 00:56:11.643 10:08:19 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 8511646 00:56:11.643 10:08:19 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:56:11.644 10:08:19 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:56:11.902 Running I/O for 1 seconds... 00:56:12.837 11942.00 IOPS, 46.65 MiB/s 00:56:12.837 Latency(us) 00:56:12.837 [2024-12-09T10:08:20.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:12.837 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:56:12.837 nvme0n1 : 1.01 11944.58 46.66 0.00 0.00 10655.19 9770.82 21328.99 00:56:12.837 [2024-12-09T10:08:20.339Z] =================================================================================================================== 00:56:12.837 [2024-12-09T10:08:20.339Z] Total : 11944.58 46.66 0.00 0.00 10655.19 9770.82 21328.99 00:56:12.837 { 00:56:12.837 "results": [ 00:56:12.837 { 00:56:12.837 "job": "nvme0n1", 00:56:12.837 "core_mask": "0x2", 00:56:12.837 "workload": "randread", 00:56:12.837 "status": "finished", 00:56:12.837 "queue_depth": 128, 00:56:12.837 "io_size": 4096, 00:56:12.837 "runtime": 1.0105, 00:56:12.837 "iops": 11944.58189015339, 00:56:12.837 "mibps": 46.65852300841168, 00:56:12.837 "io_failed": 0, 00:56:12.837 "io_timeout": 0, 00:56:12.837 "avg_latency_us": 10655.187614069444, 00:56:12.837 "min_latency_us": 9770.821818181817, 00:56:12.837 "max_latency_us": 21328.98909090909 00:56:12.837 } 00:56:12.837 ], 00:56:12.837 "core_count": 1 00:56:12.837 } 00:56:12.837 10:08:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:56:12.837 10:08:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:56:13.404 10:08:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:56:13.404 10:08:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:56:13.404 10:08:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:56:13.404 10:08:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:56:13.404 10:08:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:56:13.404 10:08:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:56:13.404 10:08:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:56:13.404 10:08:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:56:13.404 10:08:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:56:13.404 10:08:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:56:13.404 10:08:20 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:56:13.404 10:08:20 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:56:13.404 10:08:20 keyring_linux 
-- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:56:13.404 10:08:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:13.404 10:08:20 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:56:13.404 10:08:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:13.404 10:08:20 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:56:13.404 10:08:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:56:13.663 [2024-12-09 10:08:21.144243] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:56:13.663 [2024-12-09 10:08:21.144886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25571d0 (107): Transport endpoint is not connected 00:56:13.663 [2024-12-09 10:08:21.145875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25571d0 (9): Bad file descriptor 00:56:13.663 [2024-12-09 10:08:21.146872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:56:13.663 [2024-12-09 10:08:21.146898] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:56:13.663 [2024-12-09 10:08:21.146922] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:56:13.663 [2024-12-09 10:08:21.146933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:56:13.663 request: 00:56:13.663 { 00:56:13.663 "name": "nvme0", 00:56:13.663 "trtype": "tcp", 00:56:13.663 "traddr": "127.0.0.1", 00:56:13.663 "adrfam": "ipv4", 00:56:13.663 "trsvcid": "4420", 00:56:13.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:56:13.663 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:56:13.663 "prchk_reftag": false, 00:56:13.663 "prchk_guard": false, 00:56:13.663 "hdgst": false, 00:56:13.663 "ddgst": false, 00:56:13.663 "psk": ":spdk-test:key1", 00:56:13.663 "allow_unrecognized_csi": false, 00:56:13.663 "method": "bdev_nvme_attach_controller", 00:56:13.663 "req_id": 1 00:56:13.663 } 00:56:13.663 Got JSON-RPC error response 00:56:13.663 response: 00:56:13.663 { 00:56:13.663 "code": -5, 00:56:13.663 "message": "Input/output error" 00:56:13.663 } 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@33 -- # sn=8511646 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 8511646 00:56:13.922 1 links removed 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@33 -- # sn=597450223 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 597450223 00:56:13.922 1 links removed 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85648 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85648 ']' 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85648 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85648 00:56:13.922 killing process with pid 85648 00:56:13.922 Received shutdown signal, test time was about 1.000000 seconds 00:56:13.922 00:56:13.922 Latency(us) 00:56:13.922 [2024-12-09T10:08:21.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:13.922 [2024-12-09T10:08:21.424Z] =================================================================================================================== 00:56:13.922 [2024-12-09T10:08:21.424Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:56:13.922 10:08:21 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85648' 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@973 -- # kill 85648 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@978 -- # wait 85648 00:56:13.922 10:08:21 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85624 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85624 ']' 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85624 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:13.922 10:08:21 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85624 00:56:14.181 killing process with pid 85624 00:56:14.181 10:08:21 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:14.181 10:08:21 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:14.181 10:08:21 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85624' 00:56:14.181 10:08:21 keyring_linux -- common/autotest_common.sh@973 -- # kill 85624 00:56:14.181 10:08:21 keyring_linux -- common/autotest_common.sh@978 -- # wait 85624 00:56:14.440 00:56:14.440 real 0m5.894s 00:56:14.440 user 0m11.679s 00:56:14.440 sys 0m1.371s 00:56:14.440 10:08:21 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:14.440 ************************************ 00:56:14.440 END TEST keyring_linux 00:56:14.440 ************************************ 00:56:14.440 10:08:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:56:14.440 10:08:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:56:14.440 10:08:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:56:14.440 10:08:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:56:14.440 10:08:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:56:14.440 10:08:21 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:56:14.440 10:08:21 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:56:14.440 10:08:21 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:56:14.440 10:08:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:14.440 10:08:21 -- common/autotest_common.sh@10 -- # set +x 00:56:14.440 10:08:21 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:56:14.440 10:08:21 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:56:14.440 10:08:21 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:56:14.440 10:08:21 -- common/autotest_common.sh@10 -- # set +x 00:56:16.341 INFO: APP EXITING 00:56:16.341 INFO: killing all VMs 
00:56:16.341 INFO: killing vhost app 00:56:16.341 INFO: EXIT DONE 00:56:16.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:56:16.908 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:56:16.908 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:56:17.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:56:17.475 Cleaning 00:56:17.475 Removing: /var/run/dpdk/spdk0/config 00:56:17.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:56:17.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:56:17.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:56:17.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:56:17.476 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:56:17.737 Removing: /var/run/dpdk/spdk0/hugepage_info 00:56:17.737 Removing: /var/run/dpdk/spdk1/config 00:56:17.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:56:17.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:56:17.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:56:17.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:56:17.737 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:56:17.737 Removing: /var/run/dpdk/spdk1/hugepage_info 00:56:17.737 Removing: /var/run/dpdk/spdk2/config 00:56:17.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:56:17.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:56:17.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:56:17.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:56:17.737 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:56:17.737 Removing: /var/run/dpdk/spdk2/hugepage_info 00:56:17.737 Removing: /var/run/dpdk/spdk3/config 00:56:17.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:56:17.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:56:17.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:56:17.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:56:17.737 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:56:17.737 Removing: /var/run/dpdk/spdk3/hugepage_info 00:56:17.737 Removing: /var/run/dpdk/spdk4/config 00:56:17.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:56:17.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:56:17.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:56:17.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:56:17.737 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:56:17.737 Removing: /var/run/dpdk/spdk4/hugepage_info 00:56:17.737 Removing: /dev/shm/nvmf_trace.0 00:56:17.737 Removing: /dev/shm/spdk_tgt_trace.pid56767 00:56:17.737 Removing: /var/run/dpdk/spdk0 00:56:17.737 Removing: /var/run/dpdk/spdk1 00:56:17.737 Removing: /var/run/dpdk/spdk2 00:56:17.737 Removing: /var/run/dpdk/spdk3 00:56:17.737 Removing: /var/run/dpdk/spdk4 00:56:17.737 Removing: /var/run/dpdk/spdk_pid56620 00:56:17.737 Removing: /var/run/dpdk/spdk_pid56767 00:56:17.737 Removing: /var/run/dpdk/spdk_pid56966 00:56:17.737 Removing: /var/run/dpdk/spdk_pid57047 00:56:17.737 Removing: /var/run/dpdk/spdk_pid57067 00:56:17.737 Removing: /var/run/dpdk/spdk_pid57176 00:56:17.737 Removing: /var/run/dpdk/spdk_pid57187 00:56:17.737 Removing: /var/run/dpdk/spdk_pid57321 00:56:17.737 Removing: /var/run/dpdk/spdk_pid57522 00:56:17.737 Removing: /var/run/dpdk/spdk_pid57676 00:56:17.737 Removing: /var/run/dpdk/spdk_pid57754 00:56:17.737 
Removing: /var/run/dpdk/spdk_pid57825 00:56:17.737 Removing: /var/run/dpdk/spdk_pid57916 00:56:17.737 Removing: /var/run/dpdk/spdk_pid57994 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58027 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58057 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58132 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58202 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58641 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58687 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58731 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58739 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58806 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58809 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58876 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58879 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58930 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58935 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58981 00:56:17.737 Removing: /var/run/dpdk/spdk_pid58999 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59129 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59165 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59247 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59574 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59590 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59622 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59636 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59657 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59676 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59689 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59705 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59724 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59743 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59754 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59777 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59791 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59812 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59831 00:56:17.737 Removing: /var/run/dpdk/spdk_pid59839 00:56:17.997 Removing: /var/run/dpdk/spdk_pid59860 00:56:17.997 Removing: /var/run/dpdk/spdk_pid59879 00:56:17.997 Removing: /var/run/dpdk/spdk_pid59898 00:56:17.997 Removing: /var/run/dpdk/spdk_pid59908 00:56:17.997 Removing: /var/run/dpdk/spdk_pid59944 00:56:17.997 Removing: /var/run/dpdk/spdk_pid59952 00:56:17.997 Removing: /var/run/dpdk/spdk_pid59987 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60059 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60082 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60097 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60120 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60135 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60137 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60185 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60193 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60229 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60233 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60248 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60252 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60267 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60271 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60281 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60290 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60320 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60346 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60356 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60384 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60394 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60401 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60442 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60454 00:56:17.997 Removing: 
/var/run/dpdk/spdk_pid60483 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60490 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60498 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60505 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60513 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60520 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60532 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60535 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60619 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60667 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60784 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60813 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60858 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60872 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60889 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60909 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60940 00:56:17.997 Removing: /var/run/dpdk/spdk_pid60956 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61035 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61052 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61096 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61174 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61236 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61260 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61359 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61401 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61434 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61660 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61758 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61786 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61816 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61849 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61883 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61916 00:56:17.997 Removing: /var/run/dpdk/spdk_pid61948 00:56:17.997 Removing: /var/run/dpdk/spdk_pid62336 00:56:17.997 Removing: /var/run/dpdk/spdk_pid62376 00:56:17.997 Removing: /var/run/dpdk/spdk_pid62726 00:56:17.997 Removing: /var/run/dpdk/spdk_pid63184 00:56:17.997 Removing: /var/run/dpdk/spdk_pid63469 00:56:17.997 Removing: /var/run/dpdk/spdk_pid64314 00:56:17.997 Removing: /var/run/dpdk/spdk_pid65227 00:56:17.997 Removing: /var/run/dpdk/spdk_pid65350 00:56:17.997 Removing: /var/run/dpdk/spdk_pid65412 00:56:17.997 Removing: /var/run/dpdk/spdk_pid66827 00:56:17.997 Removing: /var/run/dpdk/spdk_pid67128 00:56:17.997 Removing: /var/run/dpdk/spdk_pid71044 00:56:17.997 Removing: /var/run/dpdk/spdk_pid71402 00:56:17.997 Removing: /var/run/dpdk/spdk_pid71511 00:56:17.997 Removing: /var/run/dpdk/spdk_pid71644 00:56:17.997 Removing: /var/run/dpdk/spdk_pid71665 00:56:17.997 Removing: /var/run/dpdk/spdk_pid71686 00:56:17.997 Removing: /var/run/dpdk/spdk_pid71715 00:56:17.997 Removing: /var/run/dpdk/spdk_pid71807 00:56:17.997 Removing: /var/run/dpdk/spdk_pid71935 00:56:17.997 Removing: /var/run/dpdk/spdk_pid72072 00:56:17.997 Removing: /var/run/dpdk/spdk_pid72146 00:56:17.997 Removing: /var/run/dpdk/spdk_pid72338 00:56:17.997 Removing: /var/run/dpdk/spdk_pid72401 00:56:17.997 Removing: /var/run/dpdk/spdk_pid72486 00:56:17.997 Removing: /var/run/dpdk/spdk_pid72847 00:56:17.997 Removing: /var/run/dpdk/spdk_pid73262 00:56:18.255 Removing: /var/run/dpdk/spdk_pid73263 00:56:18.255 Removing: /var/run/dpdk/spdk_pid73264 00:56:18.255 Removing: /var/run/dpdk/spdk_pid73532 00:56:18.255 Removing: /var/run/dpdk/spdk_pid73795 00:56:18.255 Removing: /var/run/dpdk/spdk_pid74171 00:56:18.255 Removing: /var/run/dpdk/spdk_pid74179 00:56:18.255 Removing: /var/run/dpdk/spdk_pid74508 00:56:18.255 Removing: /var/run/dpdk/spdk_pid74528 
00:56:18.255 Removing: /var/run/dpdk/spdk_pid74547 00:56:18.255 Removing: /var/run/dpdk/spdk_pid74580 00:56:18.255 Removing: /var/run/dpdk/spdk_pid74585 00:56:18.255 Removing: /var/run/dpdk/spdk_pid74943 00:56:18.255 Removing: /var/run/dpdk/spdk_pid74993 00:56:18.255 Removing: /var/run/dpdk/spdk_pid75325 00:56:18.255 Removing: /var/run/dpdk/spdk_pid75515 00:56:18.255 Removing: /var/run/dpdk/spdk_pid75943 00:56:18.255 Removing: /var/run/dpdk/spdk_pid76486 00:56:18.255 Removing: /var/run/dpdk/spdk_pid77362 00:56:18.255 Removing: /var/run/dpdk/spdk_pid78003 00:56:18.255 Removing: /var/run/dpdk/spdk_pid78005 00:56:18.255 Removing: /var/run/dpdk/spdk_pid80034 00:56:18.255 Removing: /var/run/dpdk/spdk_pid80094 00:56:18.255 Removing: /var/run/dpdk/spdk_pid80147 00:56:18.255 Removing: /var/run/dpdk/spdk_pid80201 00:56:18.255 Removing: /var/run/dpdk/spdk_pid80301 00:56:18.255 Removing: /var/run/dpdk/spdk_pid80353 00:56:18.255 Removing: /var/run/dpdk/spdk_pid80401 00:56:18.255 Removing: /var/run/dpdk/spdk_pid80454 00:56:18.255 Removing: /var/run/dpdk/spdk_pid80809 00:56:18.255 Removing: /var/run/dpdk/spdk_pid82023 00:56:18.255 Removing: /var/run/dpdk/spdk_pid82165 00:56:18.255 Removing: /var/run/dpdk/spdk_pid82410 00:56:18.255 Removing: /var/run/dpdk/spdk_pid82994 00:56:18.255 Removing: /var/run/dpdk/spdk_pid83155 00:56:18.255 Removing: /var/run/dpdk/spdk_pid83312 00:56:18.255 Removing: /var/run/dpdk/spdk_pid83408 00:56:18.256 Removing: /var/run/dpdk/spdk_pid83568 00:56:18.256 Removing: /var/run/dpdk/spdk_pid83677 00:56:18.256 Removing: /var/run/dpdk/spdk_pid84378 00:56:18.256 Removing: /var/run/dpdk/spdk_pid84409 00:56:18.256 Removing: /var/run/dpdk/spdk_pid84450 00:56:18.256 Removing: /var/run/dpdk/spdk_pid84705 00:56:18.256 Removing: /var/run/dpdk/spdk_pid84742 00:56:18.256 Removing: /var/run/dpdk/spdk_pid84774 00:56:18.256 Removing: /var/run/dpdk/spdk_pid85243 00:56:18.256 Removing: /var/run/dpdk/spdk_pid85247 00:56:18.256 Removing: /var/run/dpdk/spdk_pid85502 00:56:18.256 Removing: /var/run/dpdk/spdk_pid85624 00:56:18.256 Removing: /var/run/dpdk/spdk_pid85648 00:56:18.256 Clean 00:56:18.256 10:08:25 -- common/autotest_common.sh@1453 -- # return 0 00:56:18.256 10:08:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:56:18.256 10:08:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:18.256 10:08:25 -- common/autotest_common.sh@10 -- # set +x 00:56:18.256 10:08:25 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:56:18.256 10:08:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:18.256 10:08:25 -- common/autotest_common.sh@10 -- # set +x 00:56:18.513 10:08:25 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:56:18.513 10:08:25 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:56:18.513 10:08:25 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:56:18.513 10:08:25 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:56:18.514 10:08:25 -- spdk/autotest.sh@398 -- # hostname 00:56:18.514 10:08:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:56:18.772 geninfo: WARNING: invalid characters removed from testname! 
00:56:50.927 10:08:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:50.927 10:08:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:52.831 10:09:00 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:56.118 10:09:03 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:58.651 10:09:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:57:01.983 10:09:08 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:57:04.517 10:09:11 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:57:04.517 10:09:11 -- spdk/autorun.sh@1 -- $ timing_finish 00:57:04.517 10:09:11 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:57:04.517 10:09:11 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:57:04.517 10:09:11 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:57:04.517 10:09:11 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:57:04.517 + [[ -n 5256 ]] 00:57:04.517 + sudo kill 5256 00:57:04.527 [Pipeline] } 00:57:04.544 [Pipeline] // timeout 00:57:04.550 [Pipeline] } 00:57:04.565 [Pipeline] // stage 00:57:04.571 [Pipeline] } 00:57:04.587 [Pipeline] // catchError 00:57:04.597 [Pipeline] stage 00:57:04.599 [Pipeline] { (Stop VM) 00:57:04.613 [Pipeline] sh 00:57:04.895 + vagrant halt 00:57:08.188 ==> default: Halting domain... 
00:57:14.762 [Pipeline] sh 00:57:15.043 + vagrant destroy -f 00:57:18.326 ==> default: Removing domain... 00:57:18.596 [Pipeline] sh 00:57:18.876 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:57:18.885 [Pipeline] } 00:57:18.902 [Pipeline] // stage 00:57:18.909 [Pipeline] } 00:57:18.923 [Pipeline] // dir 00:57:18.929 [Pipeline] } 00:57:18.944 [Pipeline] // wrap 00:57:18.950 [Pipeline] } 00:57:18.964 [Pipeline] // catchError 00:57:18.973 [Pipeline] stage 00:57:18.974 [Pipeline] { (Epilogue) 00:57:18.987 [Pipeline] sh 00:57:19.268 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:57:25.839 [Pipeline] catchError 00:57:25.841 [Pipeline] { 00:57:25.855 [Pipeline] sh 00:57:26.150 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:57:26.408 Artifacts sizes are good 00:57:26.417 [Pipeline] } 00:57:26.430 [Pipeline] // catchError 00:57:26.442 [Pipeline] archiveArtifacts 00:57:26.449 Archiving artifacts 00:57:26.578 [Pipeline] cleanWs 00:57:26.589 [WS-CLEANUP] Deleting project workspace... 00:57:26.589 [WS-CLEANUP] Deferred wipeout is used... 00:57:26.595 [WS-CLEANUP] done 00:57:26.596 [Pipeline] } 00:57:26.611 [Pipeline] // stage 00:57:26.617 [Pipeline] } 00:57:26.633 [Pipeline] // node 00:57:26.639 [Pipeline] End of Pipeline 00:57:26.680 Finished: SUCCESS